Files
pytest2/extra/get_issues.py
Pierre Sassoulas 4588653b24 Migrate from autoflake, black, isort, pyupgrade, flake8 and pydocstyle, to ruff
ruff is faster and handles everything we had before.

The isort configuration is based on the guidance in
https://github.com/astral-sh/ruff/issues/4670; it was previously based on
reorder-python-imports (#11896).

flake8-docstrings was a wrapper around pydocstyle, which is now archived and
explicitly recommends switching to ruff in https://github.com/PyCQA/pydocstyle/pull/658.

flake8-typing-imports is mainly useful for projects that support Python 3.7,
and its one useful check will be implemented in https://github.com/astral-sh/ruff/issues/2302

We need to keep blacken-docs because ruff does not detect Python code
inside .md and .rst files. The direct link to the repo is
now used to avoid a redirect.
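A rough sketch of what keeping blacken-docs in .pre-commit-config.yaml might look like (the repo URL reflects the direct link mentioned above; the pinned versions are illustrative, not the ones from this commit):

```yaml
repos:
  - repo: https://github.com/adamchainz/blacken-docs  # direct link, no redirect
    rev: 1.16.0  # illustrative pin
    hooks:
      - id: blacken-docs
        # formats Python snippets embedded in .md and .rst files
        additional_dependencies: [black==24.1.1]  # illustrative pin
```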

Manual fixes:
- Lines that became too long
- % formatting that was not converted automatically
- type: ignore comments that were moved around
- noqa comments for hard-to-fix issues (generally UP031)
- fmt: off and fmt: on, which do not behave identically
  in black and ruff
- Re-ordered autofix hooks in pre-commit from fastest to slowest
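For reference, an isort-style setup along the lines of the linked ruff issue might look roughly like this in pyproject.toml (a sketch of the approach, not pytest's actual configuration):

```toml
[tool.ruff.lint]
# "I" enables the isort-compatible import-sorting rules
extend-select = ["I"]

[tool.ruff.lint.isort]
# approximates reorder-python-imports behavior: one import per line
force-single-line = true
```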

Co-authored-by: Ran Benita <ran@unusedvar.com>
2024-02-02 09:27:00 +01:00

87 lines
2.3 KiB
Python

import json
from pathlib import Path

import requests

issues_url = "https://api.github.com/repos/pytest-dev/pytest/issues"


def get_issues():
    issues = []
    url = issues_url
    while True:
        get_data = {"state": "all"}
        r = requests.get(url, params=get_data)
        data = r.json()
        if r.status_code == 403:
            # API request limit exceeded
            print(data["message"])
            exit(1)
        issues.extend(data)
        # Look for next page
        links = requests.utils.parse_header_links(r.headers["Link"])
        another_page = False
        for link in links:
            if link["rel"] == "next":
                url = link["url"]
                another_page = True
        if not another_page:
            return issues


def main(args):
    cachefile = Path(args.cache)
    if not cachefile.exists() or args.refresh:
        issues = get_issues()
        cachefile.write_text(json.dumps(issues), "utf-8")
    else:
        issues = json.loads(cachefile.read_text("utf-8"))

    open_issues = [x for x in issues if x["state"] == "open"]
    open_issues.sort(key=lambda x: x["number"])
    report(open_issues)


def _get_kind(issue):
    labels = [label["name"] for label in issue["labels"]]
    for key in ("bug", "enhancement", "proposal"):
        if key in labels:
            return key
    return "issue"


def report(issues):
    for issue in issues:
        title = issue["title"]
        # body = issue["body"]
        kind = _get_kind(issue)
        status = issue["state"]
        number = issue["number"]
        link = "https://github.com/pytest-dev/pytest/issues/%s/" % number
        print("----")
        print(status, kind, link)
        print(title)
        # print()
        # lines = body.split("\n")
        # print("\n".join(lines[:3]))
        # if len(lines) > 3 or len(body) > 240:
        #     print("...")
    print("\n\nFound %s open issues" % len(issues))


if __name__ == "__main__":
    import argparse

    parser = argparse.ArgumentParser(description="process GitHub issues")
    parser.add_argument(
        "--refresh", action="store_true", help="invalidate cache, refresh issues"
    )
    parser.add_argument(
        "--cache", action="store", default="issues.json", help="cache file"
    )
    args = parser.parse_args()
    main(args)
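The pagination loop in get_issues relies on requests.utils.parse_header_links to turn GitHub's Link response header into rel/url pairs. A small standalone sketch of that step (the header value below is made up for illustration):

```python
from requests.utils import parse_header_links

# A made-up Link header in GitHub's pagination format
link_header = (
    '<https://api.github.com/repositories/3222476/issues?page=2>; rel="next", '
    '<https://api.github.com/repositories/3222476/issues?page=34>; rel="last"'
)

# parse_header_links yields one dict per entry, e.g. {"url": "...", "rel": "next"}
links = parse_header_links(link_header)
next_url = next(link["url"] for link in links if link["rel"] == "next")
print(next_url)
```

Once no entry carries rel="next", the loop above stops and returns the accumulated issues.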