[FFmpeg-cvslog] [ffmpeg-patchwork_jobs_devops] UNNAMED PROJECT branch main created. fe7c654 Initial commit

ffmpeg-git at ffmpeg.org ffmpeg-git at ffmpeg.org
Wed May 28 20:32:03 EEST 2025


The branch, main has been created
        at  fe7c654eb72a62cd50f28e01f454cd3798d5e261 (commit)

- Log -----------------------------------------------------------------
commit fe7c654eb72a62cd50f28e01f454cd3798d5e261
Author:     softworkz <softworkz at hotmail.com>
AuthorDate: Wed May 28 19:31:27 2025 +0200
Commit:     softworkz <softworkz at hotmail.com>
CommitDate: Wed May 28 19:31:27 2025 +0200

    Initial commit

diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..2a61601
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,2 @@
+/__pycache__/
+patchwork.db
diff --git a/COPYING.GPLv2 b/COPYING.GPLv2
new file mode 100644
index 0000000..15f75cf
--- /dev/null
+++ b/COPYING.GPLv2
@@ -0,0 +1,339 @@
+GNU GENERAL PUBLIC LICENSE
+                       Version 2, June 1991
+
+ Copyright (C) 1989, 1991 Free Software Foundation, Inc.,
+ 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+                            Preamble
+
+  The licenses for most software are designed to take away your
+freedom to share and change it.  By contrast, the GNU General Public
+License is intended to guarantee your freedom to share and change free
+software--to make sure the software is free for all its users.  This
+General Public License applies to most of the Free Software
+Foundation's software and to any other program whose authors commit to
+using it.  (Some other Free Software Foundation software is covered by
+the GNU Lesser General Public License instead.)  You can apply it to
+your programs, too.
+
+  When we speak of free software, we are referring to freedom, not
+price.  Our General Public Licenses are designed to make sure that you
+have the freedom to distribute copies of free software (and charge for
+this service if you wish), that you receive source code or can get it
+if you want it, that you can change the software or use pieces of it
+in new free programs; and that you know you can do these things.
+
+  To protect your rights, we need to make restrictions that forbid
+anyone to deny you these rights or to ask you to surrender the rights.
+These restrictions translate to certain responsibilities for you if you
+distribute copies of the software, or if you modify it.
+
+  For example, if you distribute copies of such a program, whether
+gratis or for a fee, you must give the recipients all the rights that
+you have.  You must make sure that they, too, receive or can get the
+source code.  And you must show them these terms so they know their
+rights.
+
+  We protect your rights with two steps: (1) copyright the software, and
+(2) offer you this license which gives you legal permission to copy,
+distribute and/or modify the software.
+
+  Also, for each author's protection and ours, we want to make certain
+that everyone understands that there is no warranty for this free
+software.  If the software is modified by someone else and passed on, we
+want its recipients to know that what they have is not the original, so
+that any problems introduced by others will not reflect on the original
+authors' reputations.
+
+  Finally, any free program is threatened constantly by software
+patents.  We wish to avoid the danger that redistributors of a free
+program will individually obtain patent licenses, in effect making the
+program proprietary.  To prevent this, we have made it clear that any
+patent must be licensed for everyone's free use or not licensed at all.
+
+  The precise terms and conditions for copying, distribution and
+modification follow.
+
+                    GNU GENERAL PUBLIC LICENSE
+   TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
+
+  0. This License applies to any program or other work which contains
+a notice placed by the copyright holder saying it may be distributed
+under the terms of this General Public License.  The "Program", below,
+refers to any such program or work, and a "work based on the Program"
+means either the Program or any derivative work under copyright law:
+that is to say, a work containing the Program or a portion of it,
+either verbatim or with modifications and/or translated into another
+language.  (Hereinafter, translation is included without limitation in
+the term "modification".)  Each licensee is addressed as "you".
+
+Activities other than copying, distribution and modification are not
+covered by this License; they are outside its scope.  The act of
+running the Program is not restricted, and the output from the Program
+is covered only if its contents constitute a work based on the
+Program (independent of having been made by running the Program).
+Whether that is true depends on what the Program does.
+
+  1. You may copy and distribute verbatim copies of the Program's
+source code as you receive it, in any medium, provided that you
+conspicuously and appropriately publish on each copy an appropriate
+copyright notice and disclaimer of warranty; keep intact all the
+notices that refer to this License and to the absence of any warranty;
+and give any other recipients of the Program a copy of this License
+along with the Program.
+
+You may charge a fee for the physical act of transferring a copy, and
+you may at your option offer warranty protection in exchange for a fee.
+
+  2. You may modify your copy or copies of the Program or any portion
+of it, thus forming a work based on the Program, and copy and
+distribute such modifications or work under the terms of Section 1
+above, provided that you also meet all of these conditions:
+
+    a) You must cause the modified files to carry prominent notices
+    stating that you changed the files and the date of any change.
+
+    b) You must cause any work that you distribute or publish, that in
+    whole or in part contains or is derived from the Program or any
+    part thereof, to be licensed as a whole at no charge to all third
+    parties under the terms of this License.
+
+    c) If the modified program normally reads commands interactively
+    when run, you must cause it, when started running for such
+    interactive use in the most ordinary way, to print or display an
+    announcement including an appropriate copyright notice and a
+    notice that there is no warranty (or else, saying that you provide
+    a warranty) and that users may redistribute the program under
+    these conditions, and telling the user how to view a copy of this
+    License.  (Exception: if the Program itself is interactive but
+    does not normally print such an announcement, your work based on
+    the Program is not required to print an announcement.)
+
+These requirements apply to the modified work as a whole.  If
+identifiable sections of that work are not derived from the Program,
+and can be reasonably considered independent and separate works in
+themselves, then this License, and its terms, do not apply to those
+sections when you distribute them as separate works.  But when you
+distribute the same sections as part of a whole which is a work based
+on the Program, the distribution of the whole must be on the terms of
+this License, whose permissions for other licensees extend to the
+entire whole, and thus to each and every part regardless of who wrote it.
+
+Thus, it is not the intent of this section to claim rights or contest
+your rights to work written entirely by you; rather, the intent is to
+exercise the right to control the distribution of derivative or
+collective works based on the Program.
+
+In addition, mere aggregation of another work not based on the Program
+with the Program (or with a work based on the Program) on a volume of
+a storage or distribution medium does not bring the other work under
+the scope of this License.
+
+  3. You may copy and distribute the Program (or a work based on it,
+under Section 2) in object code or executable form under the terms of
+Sections 1 and 2 above provided that you also do one of the following:
+
+    a) Accompany it with the complete corresponding machine-readable
+    source code, which must be distributed under the terms of Sections
+    1 and 2 above on a medium customarily used for software interchange; or,
+
+    b) Accompany it with a written offer, valid for at least three
+    years, to give any third party, for a charge no more than your
+    cost of physically performing source distribution, a complete
+    machine-readable copy of the corresponding source code, to be
+    distributed under the terms of Sections 1 and 2 above on a medium
+    customarily used for software interchange; or,
+
+    c) Accompany it with the information you received as to the offer
+    to distribute corresponding source code.  (This alternative is
+    allowed only for noncommercial distribution and only if you
+    received the program in object code or executable form with such
+    an offer, in accord with Subsection b above.)
+
+The source code for a work means the preferred form of the work for
+making modifications to it.  For an executable work, complete source
+code means all the source code for all modules it contains, plus any
+associated interface definition files, plus the scripts used to
+control compilation and installation of the executable.  However, as a
+special exception, the source code distributed need not include
+anything that is normally distributed (in either source or binary
+form) with the major components (compiler, kernel, and so on) of the
+operating system on which the executable runs, unless that component
+itself accompanies the executable.
+
+If distribution of executable or object code is made by offering
+access to copy from a designated place, then offering equivalent
+access to copy the source code from the same place counts as
+distribution of the source code, even though third parties are not
+compelled to copy the source along with the object code.
+
+  4. You may not copy, modify, sublicense, or distribute the Program
+except as expressly provided under this License.  Any attempt
+otherwise to copy, modify, sublicense or distribute the Program is
+void, and will automatically terminate your rights under this License.
+However, parties who have received copies, or rights, from you under
+this License will not have their licenses terminated so long as such
+parties remain in full compliance.
+
+  5. You are not required to accept this License, since you have not
+signed it.  However, nothing else grants you permission to modify or
+distribute the Program or its derivative works.  These actions are
+prohibited by law if you do not accept this License.  Therefore, by
+modifying or distributing the Program (or any work based on the
+Program), you indicate your acceptance of this License to do so, and
+all its terms and conditions for copying, distributing or modifying
+the Program or works based on it.
+
+  6. Each time you redistribute the Program (or any work based on the
+Program), the recipient automatically receives a license from the
+original licensor to copy, distribute or modify the Program subject to
+these terms and conditions.  You may not impose any further
+restrictions on the recipients' exercise of the rights granted herein.
+You are not responsible for enforcing compliance by third parties to
+this License.
+
+  7. If, as a consequence of a court judgment or allegation of patent
+infringement or for any other reason (not limited to patent issues),
+conditions are imposed on you (whether by court order, agreement or
+otherwise) that contradict the conditions of this License, they do not
+excuse you from the conditions of this License.  If you cannot
+distribute so as to satisfy simultaneously your obligations under this
+License and any other pertinent obligations, then as a consequence you
+may not distribute the Program at all.  For example, if a patent
+license would not permit royalty-free redistribution of the Program by
+all those who receive copies directly or indirectly through you, then
+the only way you could satisfy both it and this License would be to
+refrain entirely from distribution of the Program.
+
+If any portion of this section is held invalid or unenforceable under
+any particular circumstance, the balance of the section is intended to
+apply and the section as a whole is intended to apply in other
+circumstances.
+
+It is not the purpose of this section to induce you to infringe any
+patents or other property right claims or to contest validity of any
+such claims; this section has the sole purpose of protecting the
+integrity of the free software distribution system, which is
+implemented by public license practices.  Many people have made
+generous contributions to the wide range of software distributed
+through that system in reliance on consistent application of that
+system; it is up to the author/donor to decide if he or she is willing
+to distribute software through any other system and a licensee cannot
+impose that choice.
+
+This section is intended to make thoroughly clear what is believed to
+be a consequence of the rest of this License.
+
+  8. If the distribution and/or use of the Program is restricted in
+certain countries either by patents or by copyrighted interfaces, the
+original copyright holder who places the Program under this License
+may add an explicit geographical distribution limitation excluding
+those countries, so that distribution is permitted only in or among
+countries not thus excluded.  In such case, this License incorporates
+the limitation as if written in the body of this License.
+
+  9. The Free Software Foundation may publish revised and/or new versions
+of the General Public License from time to time.  Such new versions will
+be similar in spirit to the present version, but may differ in detail to
+address new problems or concerns.
+
+Each version is given a distinguishing version number.  If the Program
+specifies a version number of this License which applies to it and "any
+later version", you have the option of following the terms and conditions
+either of that version or of any later version published by the Free
+Software Foundation.  If the Program does not specify a version number of
+this License, you may choose any version ever published by the Free Software
+Foundation.
+
+  10. If you wish to incorporate parts of the Program into other free
+programs whose distribution conditions are different, write to the author
+to ask for permission.  For software which is copyrighted by the Free
+Software Foundation, write to the Free Software Foundation; we sometimes
+make exceptions for this.  Our decision will be guided by the two goals
+of preserving the free status of all derivatives of our free software and
+of promoting the sharing and reuse of software generally.
+
+                            NO WARRANTY
+
+  11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
+FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW.  EXCEPT WHEN
+OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
+PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
+OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.  THE ENTIRE RISK AS
+TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU.  SHOULD THE
+PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
+REPAIR OR CORRECTION.
+
+  12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
+WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
+REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
+INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
+OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
+TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
+YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
+PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGES.
+
+                     END OF TERMS AND CONDITIONS
+
+            How to Apply These Terms to Your New Programs
+
+  If you develop a new program, and you want it to be of the greatest
+possible use to the public, the best way to achieve this is to make it
+free software which everyone can redistribute and change under these terms.
+
+  To do so, attach the following notices to the program.  It is safest
+to attach them to the start of each source file to most effectively
+convey the exclusion of warranty; and each file should have at least
+the "copyright" line and a pointer to where the full notice is found.
+
+    <one line to give the program's name and a brief idea of what it does.>
+    Copyright (C) <year>  <name of author>
+
+    This program is free software; you can redistribute it and/or modify
+    it under the terms of the GNU General Public License as published by
+    the Free Software Foundation; either version 2 of the License, or
+    (at your option) any later version.
+
+    This program is distributed in the hope that it will be useful,
+    but WITHOUT ANY WARRANTY; without even the implied warranty of
+    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+    GNU General Public License for more details.
+
+    You should have received a copy of the GNU General Public License along
+    with this program; if not, write to the Free Software Foundation, Inc.,
+    51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+
+Also add information on how to contact you by electronic and paper mail.
+
+If the program is interactive, make it output a short notice like this
+when it starts in an interactive mode:
+
+    Gnomovision version 69, Copyright (C) year name of author
+    Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
+    This is free software, and you are welcome to redistribute it
+    under certain conditions; type `show c' for details.
+
+The hypothetical commands `show w' and `show c' should show the appropriate
+parts of the General Public License.  Of course, the commands you use may
+be called something other than `show w' and `show c'; they could even be
+mouse-clicks or menu items--whatever suits your program.
+
+You should also get your employer (if you work as a programmer) or your
+school, if any, to sign a "copyright disclaimer" for the program, if
+necessary.  Here is a sample; alter the names:
+
+  Yoyodyne, Inc., hereby disclaims all copyright interest in the program
+  `Gnomovision' (which makes passes at compilers) written by James Hacker.
+
+  <signature of Ty Coon>, 1 April 1989
+  Ty Coon, President of Vice
+
+This General Public License does not permit incorporating your program into
+proprietary programs.  If your program is a subroutine library, you may
+consider it more useful to permit linking proprietary applications with the
+library.  If this is what you want to do, use the GNU Lesser General
+Public License instead of this License.
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..59fccac
--- /dev/null
+++ b/README.md
@@ -0,0 +1,136 @@
+# FFmpeg Patchwork CI Monitor for Azure DevOps
+
+This project adapts the FFmpeg Patchwork CI system to use Azure DevOps pipelines for running CI builds. 
+Instead of running builds locally using Docker containers, it triggers builds on Azure DevOps pipelines 
+and reports the results back to Patchwork.
+
+## Architecture
+
+The system consists of the following components:
+
+1. **Monitor Service**: A Python service that runs on a Linux VM and monitors Patchwork for new patches (the build-queuing call is sketched below).
+2. **Azure DevOps Pipelines**: CI pipelines that build FFmpeg and run tests.
+3. **SQLite Database**: Local database for tracking builds and results.
+
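+At runtime, the Monitor Service polls the Patchwork events API, queues an
+Azure DevOps build for each new patch, and records the build in the local
+database. Below is a minimal sketch of the build-queuing REST call, condensed
+from `trigger_azure_pipeline()` in `patchwork_runner.py`; the organization,
+project, PAT, pipeline ID and parameter values are placeholders:
+
+```python
+import base64
+import json
+
+import requests
+
+org, project, pat = "your-org", "your-project", "your-azure-pat"
+auth = base64.b64encode(f":{pat}".encode()).decode()
+url = f"https://dev.azure.com/{org}/{project}/_apis/build/builds?api-version=6.0"
+payload = {
+    "definition": {"id": 14},  # pipeline ID (placeholder)
+    # Classic pipelines require queue-time variables as a JSON string:
+    "parameters": json.dumps({"patchUrl": "https://...", "jobName": "x86"}),
+}
+headers = {"Authorization": f"Basic {auth}", "Content-Type": "application/json"}
+resp = requests.post(url, headers=headers, json=payload, timeout=30)
+print(resp.status_code)
+if resp.ok:
+    print("Queued build:", resp.json().get("id"))
+```
+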
+## Prerequisites
+
+- Python 3.6 or newer
+- A Patchwork API token with write permissions
+- An Azure DevOps Personal Access Token with build permissions
+- Azure DevOps pipeline(s) configured for x86 (and optionally PPC) builds
+
+## Setup Instructions
+
+### 1. Create a Classic Pipeline in Azure DevOps
+
+1. Create a new pipeline in Azure DevOps using the Classic Editor
+2. Configure the pipeline with the following variables (mark them as "Settable at queue time"):
+   - `patchUrl` - URL of the patch to download
+   - `patchSeriesId` - ID of the patch series
+   - `checkUrl` - URL for reporting check results back to Patchwork
+   - `jobName` - Name of the job (e.g., x86, ppc)
+   - `patchName` - Subject line of the patch
+3. Add a secret variable `PATCHWORK_TOKEN` for authenticating with Patchwork
+4. Add build steps for:
+   - Downloading the patch
+   - Applying the patch to FFmpeg
+   - Configuring and building FFmpeg
+   - Running FATE tests
+   - Reporting results back to Patchwork (see the sketch below)
+
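+The "Reporting results back to Patchwork" step mirrors what `post_check()` in
+`patchwork_runner.py` does. A minimal sketch, where the token, patch ID and
+check URL are placeholders:
+
+```python
+import requests
+
+check_url = "https://patchwork.ffmpeg.org/api/patches/12345/checks/"
+headers = {"Authorization": "Token your-patchwork-token"}
+payload = {
+    "state": "success",  # one of: success, fail, warning
+    "context": "build_x86",
+    "description": "Build and FATE passed",
+    "description_long": "",
+}
+resp = requests.post(check_url, headers=headers, data=payload, timeout=30)
+print(resp.status_code)
+```
+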
+### 2. Install the Monitor Service
+
+#### Automated Installation (Linux)
+
+```bash
+# Clone the repository
+git clone https://your-repo-url/ffmpeg-patchwork-ci.git
+cd ffmpeg-patchwork-ci
+
+# Install with deployment script
+sudo ./deploy_to_linux.sh
+
+# Configure
+sudo nano /etc/patchwork-ci/config.env
+
+# Start the service
+sudo systemctl start patchwork-ci
+sudo systemctl enable patchwork-ci
+```
+
+#### Manual Installation
+
+```bash
+# Install dependencies
+pip install requests python-dateutil pysocks
+
+# Configure environment variables
+export PATCHWORK_HOST="patchwork.ffmpeg.org"
+export PATCHWORK_TOKEN="your-patchwork-token"
+export AZURE_DEVOPS_ORG="your-org"
+export AZURE_DEVOPS_PROJECT="your-project"
+export AZURE_DEVOPS_PAT="your-azure-pat"
+
+# Run the monitor
+python run_patchwork_monitor.py --x86-pipeline-id 14
+```
+
+## Command Line Options
+
+### Run Monitor
+
+```
+python run_patchwork_monitor.py [OPTIONS]
+
+Options:
+  --x86-pipeline-id INT      Azure DevOps pipeline ID for x86 builds
+  --ppc-pipeline-id INT      Azure DevOps pipeline ID for PPC builds (0 to disable)
+  --db-path PATH             Path to SQLite database file
+  --patchwork-host HOST      Patchwork host (e.g., patchwork.ffmpeg.org)
+  --patchwork-token TOKEN    Patchwork API token
+  --azure-org ORG            Azure DevOps organization
+  --azure-project PROJECT    Azure DevOps project
+  --azure-pat PAT            Azure DevOps Personal Access Token
+```
+
+## Utility Scripts
+
+### Test Azure DevOps Pipeline Triggering
+
+```bash
+# Set environment variables
+export AZURE_DEVOPS_ORG="your-org"
+export AZURE_DEVOPS_PROJECT="your-project"
+export AZURE_DEVOPS_PAT="your-pat"
+
+# Run the test
+python test_azure_trigger.py
+```
+
+### Test Patchwork API Permissions
+
+```bash
+# Basic token testing
+python test_patchwork_permissions.py --token YOUR_PATCHWORK_TOKEN
+
+# Complete testing including check creation (requires patch ID)
+python test_patchwork_permissions.py --token YOUR_PATCHWORK_TOKEN --patch-id 12345
+
+# With custom host
+python test_patchwork_permissions.py --token YOUR_PATCHWORK_TOKEN --host custom.patchwork.host
+```
+
+### Check Dependencies
+
+```bash
+python check_deps.py
+```
+
+## Troubleshooting
+
+- **Azure DevOps Pipeline Not Triggering**: Check the Azure PAT permissions and ensure the pipeline ID is correct
+- **Patchwork API Errors**: Use `test_patchwork_permissions.py` to check if your token has the necessary permissions
+- **Monitor Not Finding Patches**: Check that the Patchwork host and token are configured correctly
+
+## License
+
+This project is licensed under the GNU General Public License, version 2 - see the COPYING.GPLv2 file for details.
diff --git a/check_deps.py b/check_deps.py
new file mode 100644
index 0000000..56f823f
--- /dev/null
+++ b/check_deps.py
@@ -0,0 +1,87 @@
+#!/usr/bin/env python3
+"""
+Check dependencies for FFmpeg Patchwork CI Monitor
+
+This script checks if all required Python packages are installed
+and helps troubleshoot common dependency issues.
+"""
+
+import importlib
+import sys
+import subprocess
+
+REQUIRED_PACKAGES = [
+    # Core requirements
+    ("requests", "pip install requests"),
+    ("dateutil", "pip install python-dateutil"),
+    ("sqlite3", "Built-in with Python (no installation needed)"),
+    
+    # No optional dependencies
+]
+
+def check_package(package_name, install_command):
+    """Check if a package is installed and print installation command if not."""
+    try:
+        importlib.import_module(package_name)
+        print(f"✓ {package_name} is installed")
+        return True
+    except ImportError:
+        print(f"✗ {package_name} is NOT installed")
+        print(f"  To install: {install_command}")
+        return False
+
+def check_azure_cli():
+    """Check if Azure CLI is installed (optional but helpful for testing)."""
+    try:
+        result = subprocess.run(
+            ["az", "--version"], 
+            stdout=subprocess.PIPE, 
+            stderr=subprocess.PIPE,
+            text=True
+        )
+        if result.returncode == 0:
+            print("✓ Azure CLI is installed")
+            return True
+        else:
+            print("✗ Azure CLI is NOT installed (optional)")
+            return False
+    except FileNotFoundError:
+        print("✗ Azure CLI is NOT installed (optional)")
+        print("  To install: Visit https://docs.microsoft.com/en-us/cli/azure/install-azure-cli")
+        return False
+
+def check_environment():
+    """Check for a properly configured environment."""
+    # Check Python version
+    python_version = sys.version_info
+    if python_version.major < 3 or (python_version.major == 3 and python_version.minor < 6):
+        print(f"✗ Python version {python_version.major}.{python_version.minor} detected")
+        print("  Python 3.6 or newer is required")
+    else:
+        print(f"✓ Python version {python_version.major}.{python_version.minor} detected")
+    
+    print("\nChecking required packages:")
+    all_required_packages_found = True
+    for package_name, install_command in REQUIRED_PACKAGES:
+        if not check_package(package_name, install_command):
+            all_required_packages_found = False
+    
+    print("\nChecking optional tools:")
+    check_azure_cli()
+    
+    # Print summary
+    print("\nSummary:")
+    if all_required_packages_found:
+        print("✓ All required packages are installed")
+        print("✓ The FFmpeg Patchwork CI Monitor should run correctly")
+    else:
+        print("✗ Some required packages are missing")
+        print("  Please install the missing packages before running the monitor")
+    
+    return all_required_packages_found
+
+if __name__ == "__main__":
+    print("FFmpeg Patchwork CI Monitor Dependencies Check")
+    print("=============================================")
+    
+    check_environment()
diff --git a/deploy_to_linux.sh b/deploy_to_linux.sh
new file mode 100644
index 0000000..643f84b
--- /dev/null
+++ b/deploy_to_linux.sh
@@ -0,0 +1,188 @@
+#!/bin/bash
+# FFmpeg Patchwork CI Monitor - Linux Deployment Script
+# This script sets up the Patchwork CI monitor to run in-place
+# without copying files to system directories
+
+set -e
+
+# Color definitions
+RED='\033[0;31m'
+GREEN='\033[0;32m'
+YELLOW='\033[1;33m'
+NC='\033[0m' # No Color
+
+# Function to print a colored status message
+print_status() {
+    echo -e "${GREEN}==>${NC} $1"
+}
+
+# Function to print an error message
+print_error() {
+    echo -e "${RED}ERROR:${NC} $1"
+}
+
+# Function to print a warning message
+print_warning() {
+    echo -e "${YELLOW}WARNING:${NC} $1"
+}
+
+# Check if running as root for systemd service installation
+if [[ $EUID -ne 0 ]]; then
+   print_warning "Not running as root. Systemd service installation will be skipped."
+   print_warning "Run with sudo if you want to install the systemd service."
+   INSTALL_SERVICE=0
+else
+   INSTALL_SERVICE=1
+fi
+
+# Get the absolute path of the current directory
+INSTALL_DIR=$(readlink -f "$(pwd)")
+print_status "Using directory: $INSTALL_DIR"
+
+# Check for required files
+if [[ ! -f "$INSTALL_DIR/patchwork_runner.py" ]]; then
+    print_error "patchwork_runner.py not found in the current directory"
+    exit 1
+fi
+
+if [[ ! -f "$INSTALL_DIR/run_patchwork_monitor.py" ]]; then
+    print_error "run_patchwork_monitor.py not found in the current directory"
+    exit 1
+fi
+
+if [[ ! -f "$INSTALL_DIR/sqlite_helper.py" ]]; then
+    print_error "sqlite_helper.py not found in the current directory"
+    exit 1
+fi
+
+# Install dependencies
+print_status "Installing required dependencies"
+if [[ $EUID -eq 0 ]]; then
+    apt-get update
+    apt-get install -y python3 python3-pip rsync curl
+    pip3 install requests==2.28.1 python-dateutil==2.8.2
+else
+    print_warning "Not running as root. Please install dependencies manually:"
+    echo "sudo apt-get update"
+    echo "sudo apt-get install -y python3 python3-pip rsync curl"
+    echo "pip3 install requests==2.28.1 python-dateutil==2.8.2"
+fi
+
+# Run the dependency check
+print_status "Verifying dependencies"
+python3 "$INSTALL_DIR/check_deps.py"
+
+# Create local directories
+print_status "Creating local directories"
+mkdir -p "$INSTALL_DIR/logs"
+chmod 755 "$INSTALL_DIR/logs"
+
+# Create configuration file
+print_status "Creating local configuration file"
+cat > "$INSTALL_DIR/config.env" << EOF
+# Patchwork CI Monitor Configuration
+
+# Patchwork API configuration
+PATCHWORK_TOKEN=your_token_here
+PATCHWORK_HOST=patchwork.ffmpeg.org
+
+# Azure DevOps configuration
+AZURE_DEVOPS_ORG=your_org_here
+AZURE_DEVOPS_PROJECT=your_project_here
+AZURE_DEVOPS_PAT=your_pat_here
+
+# Logging configuration
+PATCHWORK_LOG_LEVEL=INFO
+
+# Database path (inside the installation directory)
+PATCHWORK_DB_PATH="$INSTALL_DIR/patchwork.db"
+EOF
+
+chmod 640 "$INSTALL_DIR/config.env"
+
+# Create systemd service
+if [[ $INSTALL_SERVICE -eq 1 ]]; then
+    print_status "Creating systemd service"
+    cat > /etc/systemd/system/patchwork-ci.service << EOF
+[Unit]
+Description=FFmpeg Patchwork CI Monitor
+After=network.target
+
+[Service]
+Type=simple
+# Since we're not copying files, the working directory is important
+WorkingDirectory=$INSTALL_DIR
+# Use full path to python3 and the script
+ExecStart=/usr/bin/python3 $INSTALL_DIR/run_patchwork_monitor.py --config $INSTALL_DIR/config.env
+Restart=always
+RestartSec=60
+# Give the service 10 seconds to start up
+TimeoutStartSec=10
+# Give the service 10 seconds to shut down
+TimeoutStopSec=10
+# Set sensible security options
+ProtectSystem=full
+PrivateTmp=true
+NoNewPrivileges=true
+
+[Install]
+WantedBy=multi-user.target
+EOF
+
+    # Set up log rotation
+    print_status "Setting up log rotation"
+    cat > /etc/logrotate.d/patchwork-ci << EOF
+$INSTALL_DIR/logs/*.log {
+    daily
+    rotate 14
+    compress
+    delaycompress
+    missingok
+    notifempty
+    create 640 root root
+    sharedscripts
+    postrotate
+        systemctl restart patchwork-ci.service >/dev/null 2>&1 || true
+    endscript
+}
+EOF
+
+    chmod 644 /etc/logrotate.d/patchwork-ci
+
+    # Reload systemd
+    systemctl daemon-reload
+    print_status "Systemd service created: patchwork-ci.service"
+fi
+
+# No convenience script needed - run the monitor directly with --config
+
+# Print installation summary and next steps
+print_status "Installation complete!"
+print_status "Please edit $INSTALL_DIR/config.env to set your configuration"
+
+if [[ $INSTALL_SERVICE -eq 1 ]]; then
+    print_status "To start the service: sudo systemctl start patchwork-ci"
+    print_status "To enable automatic startup: sudo systemctl enable patchwork-ci"
+    print_status "To view logs: sudo journalctl -u patchwork-ci -f"
+else
+    print_status "To start manually: python3 run_patchwork_monitor.py --config config.env"
+    print_status "To view logs (redirect output with '>> logs/monitor.log 2>&1'): tail -f logs/monitor.log"
+fi
+
+echo ""
+print_warning "You MUST edit $INSTALL_DIR/config.env before starting"
+echo "Specifically, set the following required values:"
+echo "  - PATCHWORK_TOKEN"
+echo "  - AZURE_DEVOPS_ORG"
+echo "  - AZURE_DEVOPS_PROJECT"
+echo "  - AZURE_DEVOPS_PAT"
+echo ""
+print_status "Note: Pipeline IDs are defined directly in patchwork_runner.py"
+print_status "Multiple pipelines will be triggered for each patch:"
+# echo "  - linux_x64 (Pipeline ID: 14)"  # Currently commented out in code
+echo "  - linux_x64_oot (Pipeline ID: 18)"  # Note: ID is 18, not 16
+echo "  - linux_x64_shared (Pipeline ID: 17)"
+echo "  - win_msvc_x64 (Pipeline ID: 19)"
+echo "  - win_gcc_x64 (Pipeline ID: 21)"
+echo "  - mac_x64 (Pipeline ID: 15)"
+echo "Edit patchwork_runner.py to change these pipeline IDs if needed"
diff --git a/job.py b/job.py
new file mode 100644
index 0000000..20eb4c4
--- /dev/null
+++ b/job.py
@@ -0,0 +1,13 @@
+class Job:
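+    """Configuration for a single CI job, mapped to one Azure DevOps pipeline."""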
+    def __init__(self, name, config):
+        self.name = name
+        
+        # Required settings for Azure DevOps integration
+        self.azure_pipeline_id = config.get("azure_pipeline_id", 0)
+        # These properties are no longer used with Azure DevOps but kept for compatibility
+        self.build_flags = config.get("build_flags", "-j8")
+        self.fate_flags = config.get("fate_flags", "-j8")
+        self.run_full_series = config.get("run_full_series", True)  # Default to True to process all patches
diff --git a/logging_helpers.py b/logging_helpers.py
new file mode 100644
index 0000000..81d40c4
--- /dev/null
+++ b/logging_helpers.py
@@ -0,0 +1,68 @@
+#!/usr/bin/env python3
+"""
+Logging utilities for the FFmpeg Patchwork CI Monitor
+
+This module provides consistent logging functions used across
+the patchwork monitoring scripts.
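+
+Example usage:
+
+    from logging_helpers import set_log_level, log_message, LOG_WARNING
+    set_log_level()
+    log_message("low disk space", LOG_WARNING)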
+"""
+
+import os
+from datetime import datetime
+
+# Log level constants
+LOG_DEBUG = 10
+LOG_INFO = 20
+LOG_WARNING = 30
+LOG_ERROR = 40
+
+# Default log level (INFO)
+current_log_level = LOG_INFO
+
+# Log level names for display
+log_level_names = {
+    LOG_DEBUG: "DEBUG",
+    LOG_INFO: "INFO",
+    LOG_WARNING: "WARNING",
+    LOG_ERROR: "ERROR"
+}
+
+def set_log_level():
+    """Set the log level based on environment variable"""
+    global current_log_level
+    log_level_str = os.environ.get("PATCHWORK_LOG_LEVEL", "INFO").upper()
+    
+    if log_level_str == "DEBUG":
+        current_log_level = LOG_DEBUG
+    elif log_level_str == "INFO":
+        current_log_level = LOG_INFO
+    elif log_level_str == "WARNING":
+        current_log_level = LOG_WARNING
+    elif log_level_str == "ERROR":
+        current_log_level = LOG_ERROR
+    
+    # Use direct print to avoid circular reference
+    timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
+    # Add flush=True to ensure immediate output regardless of line endings
+    print(f"[{timestamp}] [INFO] Log level set to {log_level_str}", flush=True)
+
+def log_message(message, level=LOG_INFO):
+    """
+    Print a log message with timestamp and level.
+    Ensures proper line ending handling regardless of source file format.
+    
+    Args:
+        message: The message to log
+        level: The log level (default: INFO)
+    """
+    # Only log if the message level is at or above the current log level
+    if level >= current_log_level:
+        timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
+        level_name = log_level_names.get(level, "INFO")
+        
+        # Normalize message to ensure consistent line endings
+        if isinstance(message, str):
+            # Replace any Windows CRLF with just LF to ensure proper handling
+            message = message.replace('\r\n', '\n')
+        
+        # Add flush=True to ensure immediate output
+        print(f"[{timestamp}] [{level_name}] {message}", flush=True)
diff --git a/patchwork_runner.py b/patchwork_runner.py
new file mode 100644
index 0000000..e046362
--- /dev/null
+++ b/patchwork_runner.py
@@ -0,0 +1,537 @@
+#!/usr/bin/env python3
+"""
+FFmpeg Patchwork CI Monitor for Azure DevOps
+
+This script monitors the Patchwork API for new FFmpeg patches and triggers
+Azure DevOps builds to run CI tests, then reports results back to Patchwork.
+"""
+
+import json
+import os
+import re
+import requests
+import time
+import urllib.parse
+import base64
+
+from datetime import datetime, timezone
+from dateutil.relativedelta import relativedelta
+from job import Job
+from sqlite_helper import SQLiteDatabase
+from logging_helpers import log_message, set_log_level, LOG_DEBUG, LOG_INFO, LOG_WARNING, LOG_ERROR
+
+# Network request timeout in seconds
+REQUEST_TIMEOUT = 30
+
+# Environment variables
+env = os.environ
+
+# Initialize log level from environment
+set_log_level()
+
+# Patchwork configuration
+patchwork_token = env.get("PATCHWORK_TOKEN", "")
+patchwork_host = env.get("PATCHWORK_HOST", "patchwork.ffmpeg.org")
+
+# Database configuration
+db_path = env.get("PATCHWORK_DB_PATH", "patchwork.db")
+
+# Azure DevOps configuration
+azure_org = env.get("AZURE_DEVOPS_ORG", "")
+azure_project = env.get("AZURE_DEVOPS_PROJECT", "")
+azure_pat = env.get("AZURE_DEVOPS_PAT", "")
+
+
+# Setup configuration for all pipelines - defined at module level so it can be imported
+jobs_list = []
+
+# # 14: linux-x64
+# config_linux_x64 = {
+#     "azure_pipeline_id": 14
+# }
+# jobs_list.append(Job("linux_x64", config_linux_x64))
+
+# 18: linux-x64-oot
+config_linux_x64_oot = {
+    "azure_pipeline_id": 18
+}
+jobs_list.append(Job("linux_x64_oot", config_linux_x64_oot))
+
+# 17: linux-x64-shared
+config_linux_x64_shared = {
+    "azure_pipeline_id": 17
+}
+jobs_list.append(Job("linux_x64_shared", config_linux_x64_shared))
+
+# 19: win-msvc-x64
+config_win_msvc_x64 = {
+    "azure_pipeline_id": 19
+}
+jobs_list.append(Job("win_msvc_x64", config_win_msvc_x64))
+
+# 21: win-gcc-x64
+config_win_gcc_x64 = {
+    "azure_pipeline_id": 21
+}
+jobs_list.append(Job("win_gcc_x64", config_win_gcc_x64))
+
+# 15: mac-x64
+config_mac_x64 = {
+    "azure_pipeline_id": 15
+}
+jobs_list.append(Job("mac_x64", config_mac_x64))
+
+def post_check(check_url, type_check, context, msg_short, msg_long):
+    """
+    Post a check result to Patchwork
+    
+    Args:
+        check_url: URL endpoint for posting check results
+        type_check: Status type (success, fail, warning)
+        context: Context for the check (e.g., 'make_x86')
+        msg_short: Short description message
+        msg_long: Detailed message or log
+    """
+    if isinstance(msg_long, bytes):
+        split_char = b'\n'
+        msg_long = msg_long.replace(b'\"', b'')
+        msg_long = msg_long.replace(b';', b'')
+    else:
+        split_char = '\n'
+        msg_long = msg_long.replace('\"', '')
+        msg_long = msg_long.replace(';', '')
+
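+    # Keep only the last 200 lines of a long log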
+    msg_long_split = msg_long.split(split_char)
+    if len(msg_long_split) > 200:
+        msg_long_split = msg_long_split[-200:]
+
+    msg_long = split_char.join(msg_long_split)
+
+    headers = {"Authorization": f"Token {patchwork_token}"}
+    payload = {
+        "state": type_check,
+        "context": context,
+        "description": msg_short,
+        "description_long": msg_long
+    }
+    
+    try:
+        resp = requests.post(check_url, headers=headers, data=payload, timeout=REQUEST_TIMEOUT)
+        print(resp)
+        print(resp.content)
+    except (requests.exceptions.Timeout, requests.exceptions.ConnectionError) as e:
+        log_message(f"Request timeout or connection error when posting check: {str(e)}", LOG_ERROR)
+
+def regex_version_and_commit(subject):
+    """
+    Extract version and commit entry information from a subject line
+    
+    Args:
+        subject: Email subject line to parse
+        
+    Returns:
+        Tuple of (version_num, commit_entry_num, commit_entry_den)
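+
+    Example:
+        "[FFmpeg-devel] [PATCH v2 3/5] ..." parses to (2, 3, 5)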
+    """
+    subject_clean_re = re.compile(r'\[[^]]*\]\s+(\[[^]]*\])')
+    version_re = re.compile(r'[vV](\d+)')
+    commit_entry_re = re.compile(r'(\d+)/(\d+)')
+
+    subject_clean_match = subject_clean_re.match(subject)
+    if subject_clean_match is None:
+        return 1, 1, 1
+
+    label = subject_clean_match.group(1)
+    version_match = version_re.search(label)
+
+    if version_match is None:
+        version_num = 1
+    else:
+        version_num = int(version_match.group(1))
+
+    commit_entry_match = commit_entry_re.search(label)
+    if commit_entry_match is None:
+        commit_entry_num = 1
+        commit_entry_den = 1
+    else:
+        commit_entry_num = int(commit_entry_match.group(1))
+        commit_entry_den = int(commit_entry_match.group(2))
+
+    return version_num, commit_entry_num, commit_entry_den
+
+def trigger_azure_pipeline(job, patch):
+    """
+    Trigger an Azure DevOps pipeline build
+    
+    Args:
+        job: Job object containing configuration
+        patch: Dictionary with patch information
+        
+    Returns:
+        Dictionary with build information including URL and ID
+    """
+    log_message(f"Starting trigger_azure_pipeline for job {job.name}")
+    pipeline_id = job.azure_pipeline_id
+    
+    if not pipeline_id:
+        log_message(f"No pipeline ID configured for job {job.name}")
+        return None
+    
+    if not azure_pat or not azure_org or not azure_project:
+        log_message("Azure DevOps configuration is incomplete - cannot trigger pipeline")
+        return None
+
+    # Create authorization header with PAT
+    log_message("Preparing Azure DevOps API request")
+    auth = base64.b64encode(f":{azure_pat}".encode()).decode()
+    
+    # Prepare API endpoint - use the queue endpoint for classic pipelines
+    api_url = f"https://dev.azure.com/{azure_org}/{azure_project}/_apis/build/builds?api-version=6.0"
+
+    # Prepare parameters to pass to the pipeline
+    parameters = {
+        "patchUrl": patch["mbox"],
+        "patchSeriesId": str(patch["series_id"]),  # Convert to string for classic pipeline
+        "checkUrl": patch["check_url"],
+        "jobName": job.name,
+        "patchName": patch["subject_email"]
+    }
+    
+    # Prepare request payload for classic pipeline
+    payload = {
+        "definition": {
+            "id": pipeline_id
+        },
+        "parameters": json.dumps(parameters)  # Classic pipelines require parameters as a JSON string
+    }
+    
+    # Set up headers
+    headers = {
+        "Authorization": f"Basic {auth}",
+        "Content-Type": "application/json"
+    }
+    
+    # Send request to Azure DevOps
+    log_message(f"Sending API request to Azure DevOps: POST {api_url}")
+    log_message(f"Pipeline ID: {pipeline_id}, Job: {job.name}")
+    log_message(f"Parameters: {json.dumps(parameters, indent=2)}")
+    
+    try:
+        response = requests.post(api_url, headers=headers, json=payload, timeout=REQUEST_TIMEOUT)
+        log_message(f"Received response from Azure DevOps: {response.status_code}")
+        
+        if response.status_code >= 200 and response.status_code < 300:
+            build_info = response.json()
+            log_message(f"Successfully triggered pipeline: {build_info.get('id')}")
+            log_message(f"Build URL: {build_info.get('_links', {}).get('web', {}).get('href')}")
+            return build_info
+        else:
+            log_message(f"Failed to trigger pipeline: {response.status_code}")
+            log_message(f"Error response: {response.text}")
+            return None
+    except requests.exceptions.Timeout:
+        log_message(f"Timeout when connecting to Azure DevOps API", LOG_ERROR)
+        return None
+    except requests.exceptions.ConnectionError:
+        log_message(f"Connection error when connecting to Azure DevOps API", LOG_ERROR)
+        return None
+    except Exception as e:
+        log_message(f"Exception while triggering pipeline: {str(e)}", LOG_ERROR)
+        return None
+
+def create_database_tables(mydb):
+    """
+    Create necessary database tables if they don't exist
+    
+    Args:
+        mydb: Database connection
+    """
+    log_message("Checking database tables", LOG_DEBUG)
+    
+    tables = {
+        "patch": "(id INTEGER PRIMARY KEY AUTOINCREMENT, msg_id TEXT, subject_email TEXT)",
+        "series": "(id INTEGER PRIMARY KEY AUTOINCREMENT, series_id INTEGER)",
+        "builds": "(id INTEGER PRIMARY KEY AUTOINCREMENT, msg_id TEXT, job_name TEXT, build_id TEXT, "
+                 "status TEXT, series_id INTEGER, started_at TEXT, completed_at TEXT)"
+    }
+    
+    created_tables = []
+    
+    for table_name, schema in tables.items():
+        # Log at DEBUG level for routine checks
+        log_message(f"Checking {table_name} table", LOG_DEBUG)
+        result = mydb.create_missing_table(table_name, schema)
+        
+        # Only log at INFO level if table was actually created
+        if result:
+            created_tables.append(table_name)
+    
+    # Log a summary message at INFO level
+    if created_tables:
+        log_message(f"Created the following tables: {', '.join(created_tables)}")
+    else:
+        log_message("All database tables already exist", LOG_DEBUG)
+
+def fetch_and_process_patches(mydb, jobs_list, time_interval):
+    """
+    Fetch new patches from Patchwork and trigger builds
+    
+    Args:
+        mydb: Database connection
+        jobs_list: List of job configurations
+        time_interval: Time interval in minutes to look back for patches
+        
+    Returns:
+        List of processed patches
+    """
+    log_message(f"Starting fetch_and_process_patches (looking back {time_interval:.2f} minutes)")
+    
+    # Ensure database tables exist before proceeding
+    create_database_tables(mydb)
+    
+    patch_list = []
+
+    headers = {"Authorization": f"Token {patchwork_token}", "Host": patchwork_host}
+
+    # Look back based on the provided time interval
+    utc_time = datetime.now(timezone.utc) - relativedelta(minutes=time_interval)
+    str_time = utc_time.strftime("%Y-%m-%dT%H:%M:%S")
+    str_time = urllib.parse.quote(str_time)
+    url_request = f"/api/events/?category=patch-completed&since={str_time}"
+    url = f"https://{patchwork_host}{url_request}"
+
+    # Use DEBUG level for routine API calls to reduce log noise during normal operation
+    log_message(f"Making API request to Patchwork: GET {url}", LOG_DEBUG)
+    try:
+        resp = requests.get(url, headers=headers, timeout=REQUEST_TIMEOUT)
+        log_message(f"Received response: {resp.status_code}", LOG_DEBUG)
+        
+        # Parse API response
+        reply_list = json.loads(resp.content)
+    except requests.exceptions.Timeout:
+        log_message(f"Request timeout when fetching events from Patchwork API", LOG_ERROR)
+        return []
+    except requests.exceptions.ConnectionError:
+        log_message(f"Connection error when fetching events from Patchwork API", LOG_ERROR)
+        return []
+    except Exception as e:
+        log_message(f"Unexpected error when fetching events: {str(e)}", LOG_ERROR)
+        return []
+    
+    # Log result at INFO level if events found, DEBUG if empty (common case)
+    if reply_list:
+        log_message(f"Found {len(reply_list)} events")
+    else:
+        log_message("No events found in API response", LOG_DEBUG)
+    
+    for reply in reply_list:
+        log_message(f"Processing event ID: {reply['id']}", LOG_DEBUG)
+        
+        patch_url = reply["payload"]["patch"]["url"]
+        series_id = reply["payload"]["series"]["id"]
+
+        event_id = reply["id"]
+        msg_id = reply["payload"]["patch"]["msgid"]
+        mbox = reply["payload"]["patch"]["mbox"]
+
+        log_message(f"Fetching patch details from {patch_url}", LOG_DEBUG)
+        try:
+            resp_patch = requests.get(patch_url, headers=headers, timeout=REQUEST_TIMEOUT)
+            log_message(f"Patch details response: {resp_patch.status_code}", LOG_DEBUG)
+            
+            reply_patch = json.loads(resp_patch.content)
+        except requests.exceptions.Timeout:
+            log_message(f"Request timeout when fetching patch details from {patch_url}", LOG_ERROR)
+            continue
+        except requests.exceptions.ConnectionError:
+            log_message(f"Connection error when fetching patch details from {patch_url}", LOG_ERROR)
+            continue
+        except Exception as e:
+            log_message(f"Unexpected error when fetching patch details: {str(e)}", LOG_ERROR)
+            continue
+
+        author_email = reply_patch["submitter"]["email"]
+        subject_email = reply_patch["headers"]["Subject"]
+        subject_email = subject_email.replace("\n", "")
+        subject_email = subject_email.replace('\"', '')
+
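+        # Truncate stored fields to a bounded length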
+        subject_email = subject_email[:256]
+        msg_id = msg_id[:256]
+
+        check_url = reply_patch["checks"]
+        keys = ["msg_id"]
+        res = mydb.query("patch", keys, f"WHERE msg_id = \"{msg_id}\"")
+        if res:
+            log_message(f"Patch {msg_id} already exists, skipping", LOG_DEBUG)
+            continue
+        
+        log_message(f"Adding patch {msg_id} to database")
+        log_message(f"Patch info - Author: {author_email}, Subject: {subject_email}, Series ID: {series_id}")
+        log_message(f"Patch URLs - Check: {check_url}, Patch: {patch_url}, Mbox: {mbox}")
+
+        mydb.insert("patch", {"msg_id": msg_id, "subject_email": subject_email})
+
+        log_message(f"Adding patch to processing list")
+        patch_list.append({
+            "msg_id": msg_id,
+            "series_id": series_id,
+            "event_id": event_id,
+            "mbox": mbox,
+            "author_email": author_email,
+            "subject_email": subject_email,
+            "check_url": check_url
+        })
+
+        log_message(f"Checking if series {series_id} exists in database")
+        keys = ["series_id"]
+        res = mydb.query("series", keys, f"WHERE series_id = {series_id}")
+        if not res:
+            log_message(f"Adding series {series_id} to database")
+            mydb.insert("series", {"series_id": series_id})
+
+    log_message(f"Number of patches in list: {len(patch_list)}", LOG_DEBUG)
+
+    # Group patches by series_id and sort
+    log_message("Grouping and sorting patches by series_id", LOG_DEBUG)
+    series_patches = {}
+    for patch in patch_list:
+        series_id = patch["series_id"]
+        if series_id not in series_patches:
+            series_patches[series_id] = []
+        series_patches[series_id].append(patch)
+    
+    # Sort each series by commit number in reverse order (last patch first)
+    for series_id, patches in series_patches.items():
+        patches.sort(key=lambda p: regex_version_and_commit(p["subject_email"])[1], reverse=True)
+    
+    log_message(f"Found {len(series_patches)} unique series to process", LOG_DEBUG)
+    
+    # Process each series
+    for series_id, patches in series_patches.items():
+        log_message(f"Processing series {series_id} with {len(patches)} patches (in reverse order)")
+        
+        # Process each patch in the series (last patch first)
+        for patch in patches:
+            # Check commit message through Patchwork API to validate it
+            log_message(f"Fetching check details for patch {patch['msg_id']} from {patch['check_url']}")
+            try:
+                resp_patch = requests.get(f"{patch['check_url']}/", headers=headers, timeout=REQUEST_TIMEOUT)
+                log_message(f"Check details response: {resp_patch.status_code}")
+                patch_data = json.loads(resp_patch.content)
+            except requests.exceptions.Timeout:
+                log_message(f"Request timeout when fetching check details from {patch['check_url']}", LOG_ERROR)
+                continue
+            except requests.exceptions.ConnectionError:
+                log_message(f"Connection error when fetching check details from {patch['check_url']}", LOG_ERROR)
+                continue
+            except Exception as e:
+                log_message(f"Unexpected error when fetching check details: {str(e)}", LOG_ERROR)
+                continue
+            
+            # Commit message check removed as check_commit_message.py no longer exists
+            
+            # Extract commit position for logging
+            _, commit_num, commit_den = regex_version_and_commit(patch["subject_email"])
+            log_message(f"Processing patch {commit_num}/{commit_den} from series {series_id}")
+            
+            # Process all jobs for this patch
+            log_message(f"Processing jobs for patch {patch['msg_id']} (commit {commit_num}/{commit_den})")
+            for job in jobs_list:
+                log_message(f"Evaluating job {job.name} for patch {patch['msg_id']}")
+                
+                # Skip if not processing full series and not the last patch
+                if not job.run_full_series and commit_num != commit_den:
+                    log_message(f"Skipping job {job.name} - not processing full series and not the last patch")
+                    continue
+                
+                # Trigger Azure DevOps build instead of running locally
+                log_message(f"Triggering Azure DevOps pipeline for job {job.name}")
+                build_info = trigger_azure_pipeline(job, patch)
+                
+                if build_info:
+                    log_message(f"Successfully triggered pipeline for job {job.name}, build ID: {build_info.get('id', 'unknown')}")
+                    
+                    # Add entry to track the build
+                    log_message(f"Adding build record to database for job {job.name}")
+                    mydb.insert("builds", {
+                        "msg_id": patch["msg_id"],
+                        "job_name": job.name,
+                        "build_id": build_info.get("id", "unknown"),
+                        "status": "in_progress",
+                        "series_id": patch["series_id"],
+                        "started_at": datetime.now().isoformat()
+                    })
+                    log_message(f"Successfully added build record for job {job.name}")
+
+                    # Posting an initial "pending" status to Patchwork after a
+                    # successful trigger is currently disabled:
+                    # post_check(
+                    #     patch["check_url"],
+                    #     "pending",
+                    #     f"build_{job.name}",
+                    #     f"Build queued in Azure DevOps (Pipeline {job.azure_pipeline_id})",
+                    #     ""
+                    # )
+                else:
+                    log_message(f"Failed to trigger pipeline for job {job.name}")
+
+                    # Posting a "warning" check to Patchwork on trigger failure
+                    # is currently disabled:
+                    # post_check(
+                    #     patch["check_url"],
+                    #     "warning",
+                    #     f"build_{job.name}",
+                    #     "Failed to trigger Azure DevOps build",
+                    #     "Check server logs for details."
+                    # )
+
+    return patch_list
+
+if __name__ == "__main__":
+    log_message("Starting FFmpeg Patchwork CI Monitor")
+    log_message(f"Database path: {db_path}")
+    log_message(f"Patchwork host: {patchwork_host}")
+    log_message(f"Number of configured jobs: {len(jobs_list)}")
+    
+    # Local database for storing cached job results
+    log_message("Initializing database connection")
+    mydb = SQLiteDatabase(db_path)
+
+    # Create tables
+    create_database_tables(mydb)
+
+    # In minutes
+    start_time = 0
+    end_time = 0
+    
+    log_message("Entering main monitoring loop")
+    while True:
+        time_interval = (end_time - start_time) / 60 + 10
+        log_message(f"Starting new monitoring cycle (looking back {time_interval:.2f} minutes)")
+        start_time = time.time()
+        
+        try:
+            log_message("Fetching and processing patches")
+            patch_list = fetch_and_process_patches(mydb, jobs_list, time_interval)
+            
+            if not patch_list:
+                log_message("No patches found, sleeping for 1 minute")
+                time.sleep(60*1)
+                log_message("Waking up from sleep")
+            else:
+                log_message(f"Processed {len(patch_list)} patches in this cycle")
+        except Exception as e:
+            log_message(f"ERROR: Exception while processing patches: {str(e)}")
+            import traceback
+            log_message(f"Traceback: {traceback.format_exc()}")
+            log_message("Sleeping for 1 minute after error")
+            time.sleep(60)
+            log_message("Waking up from error sleep")
+            
+        end_time = time.time()
+        cycle_duration = end_time - start_time
+        log_message(f"Monitoring cycle completed in {cycle_duration:.2f} seconds")
+    
+    log_message("Closing database connection")
+    mydb.close()
+    log_message("FFmpeg Patchwork CI Monitor terminated")
diff --git a/run_patchwork_monitor.py b/run_patchwork_monitor.py
new file mode 100644
index 0000000..e6e444a
--- /dev/null
+++ b/run_patchwork_monitor.py
@@ -0,0 +1,221 @@
+#!/usr/bin/env python3
+"""
+FFmpeg Patchwork CI Monitor Runner
+
+This is a helper script that configures and runs the Patchwork CI Monitor
+with proper command-line arguments and environment variable handling.
+"""
+
+import argparse
+import os
+import sys
+import time
+import signal
+from datetime import datetime
+
+# Import shared logging functions
+from logging_helpers import log_message, set_log_level, LOG_DEBUG, LOG_INFO, LOG_WARNING, LOG_ERROR
+
+def load_config_file(config_path):
+    """Load environment variables from a configuration file"""
+    if not os.path.exists(config_path):
+        log_message(f"Config file not found: {config_path}", LOG_ERROR)
+        return False
+        
+    log_message(f"Loading configuration from: {config_path}")
+    
+    try:
+        with open(config_path, 'r') as config_file:
+            for line in config_file:
+                line = line.strip()
+                # Skip comments and empty lines
+                if not line or line.startswith('#'):
+                    continue
+                    
+                # Parse KEY=VALUE format
+                if '=' in line:
+                    key, value = line.split('=', 1)
+                    key = key.strip()
+                    value = value.strip()
+                    
+                    # Remove quotes if present
+                    if (value.startswith('"') and value.endswith('"')) or \
+                       (value.startswith("'") and value.endswith("'")):
+                        value = value[1:-1]
+                        
+                    # Set environment variable
+                    os.environ[key] = value
+                    log_message(f"Set environment variable: {key}", LOG_DEBUG)
+        return True
+    except Exception as e:
+        log_message(f"Error loading config file: {str(e)}", LOG_ERROR)
+        return False
+
+def setup_args():
+    """Parse command line arguments"""
+    log_message("Parsing command line arguments")
+    parser = argparse.ArgumentParser(description="Run the FFmpeg Patchwork CI Monitor")
+    
+    # Config file option
+    parser.add_argument("--config", 
+                        help="Path to configuration file (KEY=VALUE format)")
+    
+    # Database configuration
+    parser.add_argument("--db-path", default=os.environ.get("PATCHWORK_DB_PATH", "patchwork.db"),
+                        help="Path to SQLite database file")
+    
+    # Patchwork configuration
+    parser.add_argument("--patchwork-host", default=os.environ.get("PATCHWORK_HOST"),
+                        help="Patchwork host (e.g., patchwork.ffmpeg.org)")
+    parser.add_argument("--patchwork-token", default=os.environ.get("PATCHWORK_TOKEN"),
+                        help="Patchwork API token")
+    
+    # Azure DevOps configuration
+    parser.add_argument("--azure-org", default=os.environ.get("AZURE_DEVOPS_ORG"),
+                        help="Azure DevOps organization")
+    parser.add_argument("--azure-project", default=os.environ.get("AZURE_DEVOPS_PROJECT"),
+                        help="Azure DevOps project")
+    parser.add_argument("--azure-pat", default=os.environ.get("AZURE_DEVOPS_PAT"),
+                        help="Azure DevOps Personal Access Token")
+    
+    return parser.parse_args()
+
+def validate_args(args):
+    """Validate required arguments and provide helpful error messages"""
+    log_message("Validating command line arguments")
+    missing = []
+    
+    # Check mandatory arguments
+    if not args.patchwork_host:
+        missing.append("--patchwork-host or PATCHWORK_HOST")
+    if not args.patchwork_token:
+        missing.append("--patchwork-token or PATCHWORK_TOKEN")
+    if not args.azure_org:
+        missing.append("--azure-org or AZURE_DEVOPS_ORG")
+    if not args.azure_project:
+        missing.append("--azure-project or AZURE_DEVOPS_PROJECT")
+    if not args.azure_pat:
+        missing.append("--azure-pat or AZURE_DEVOPS_PAT")
+    
+    if missing:
+        log_message("ERROR: Missing required arguments:", LOG_ERROR)
+        for arg in missing:
+            log_message(f"  - {arg}", LOG_ERROR)
+        log_message("Run with --help for more information.", LOG_ERROR)
+        return False
+    
+    log_message("Command line arguments validation successful")
+    return True
+
+def signal_handler(sig, frame):
+    """Handle Ctrl+C and other termination signals"""
+    log_message("Received termination signal. Shutting down gracefully...")
+    sys.exit(0)
+
+def main():
+    """Main function to run the monitor"""
+    # Initialize log level
+    set_log_level()
+    
+    log_message("FFmpeg Patchwork CI Monitor - Starting")
+    
+    # Parse arguments first to get potential config file path
+    args = setup_args()
+    
+    # Load configuration from file if provided
+    if args.config:
+        if not load_config_file(args.config):
+            log_message("Failed to load configuration file, continuing with defaults and CLI arguments", LOG_WARNING)
+    
+    # Re-parse arguments so that values from the config file (now present in
+    # the environment) are picked up as defaults; explicitly passed CLI
+    # arguments still take precedence
+    args = setup_args()
+    
+    # Validate arguments
+    if not validate_args(args):
+        sys.exit(1)
+    
+    # Set environment variables from arguments
+    log_message("Setting environment variables from arguments")
+    os.environ["PATCHWORK_HOST"] = args.patchwork_host
+    os.environ["PATCHWORK_TOKEN"] = args.patchwork_token
+    os.environ["AZURE_DEVOPS_ORG"] = args.azure_org
+    os.environ["AZURE_DEVOPS_PROJECT"] = args.azure_project
+    os.environ["AZURE_DEVOPS_PAT"] = args.azure_pat
+    os.environ["PATCHWORK_DB_PATH"] = args.db_path
+
+    # Set up signal handlers for graceful shutdown
+    log_message("Setting up signal handlers for graceful shutdown")
+    signal.signal(signal.SIGINT, signal_handler)
+    signal.signal(signal.SIGTERM, signal_handler)
+    
+    # Print configuration
+    log_message("Configuration:")
+    log_message(f"  Patchwork Host: {args.patchwork_host}")
+    log_message(f"  Azure DevOps Organization: {args.azure_org}")
+    log_message(f"  Azure DevOps Project: {args.azure_project}")
+    log_message(f"  Database Path: {args.db_path}")
+    
+    # Import from patchwork_runner.py
+    log_message("Importing patchwork_runner module")
+    import patchwork_runner
+    
+    # Initialize database using the same database setup from patchwork_runner
+    log_message("Initializing database connection")
+    mydb = patchwork_runner.SQLiteDatabase(args.db_path)
+    
+    # Use the jobs_list from patchwork_runner instead of creating our own
+    # This ensures all defined pipelines will be used
+    jobs = patchwork_runner.jobs_list
+    
+    log_message(f"Using {len(jobs)} jobs from patchwork_runner.py:")
+    for job in jobs:
+        log_message(f"  - {job.name} (Pipeline ID: {job.azure_pipeline_id})")
+
+    log_message("Starting monitor - Press Ctrl+C to stop")
+    
+    # Ensure the database tables exist; patchwork_runner's __main__ section
+    # does not run when the module is merely imported, so create them here
+    patchwork_runner.create_database_tables(mydb)
+
+    # Timestamps (in seconds) used to size each cycle's lookback window
+    start_time = time.time()
+    end_time = start_time
+    
+    # Main monitoring loop
+    try:
+        while True:
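+            # Look back over the previous cycle's duration plus a
+            # 10-minute margin so patches arriving between cycles are caught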
+            time_interval = (end_time - start_time) / 60 + 10
+            start_time = time.time()
+            
+            try:
+                # Routine checking message at DEBUG level to reduce noise during normal operation
+                log_message(f"Checking for new patches (looking back {int(time_interval)} minutes)...", LOG_DEBUG)
+                patch_list = patchwork_runner.fetch_and_process_patches(mydb, jobs, time_interval)
+                
+                if not patch_list:
+                    # Use DEBUG level for the common "no patches" case
+                    log_message("No new patches found, sleeping for 1 minute...", LOG_DEBUG)
+                    time.sleep(60)
+                    log_message("Waking up from sleep", LOG_DEBUG)
+                else:
+                    # Use INFO level when we actually find patches (important event)
+                    log_message(f"Processed {len(patch_list)} patches")
+            except Exception as e:
+                # Always use ERROR level for exceptions
+                log_message(f"ERROR: Exception while processing patches: {str(e)}", LOG_ERROR)
+                import traceback
+                log_message(f"Traceback: {traceback.format_exc()}", LOG_ERROR)
+                log_message("Continuing in 60 seconds...", LOG_WARNING)
+                time.sleep(60)
+                log_message("Resuming after error", LOG_WARNING)
+                
+            end_time = time.time()
+            cycle_duration = end_time - start_time
+            # Use DEBUG for routine cycle completion
+            log_message(f"Monitoring cycle completed in {cycle_duration:.2f} seconds", LOG_DEBUG)
+    finally:
+        # Clean up
+        log_message("Closing database connection")
+        mydb.close()
+        log_message("Monitor stopped.")
+
+if __name__ == "__main__":
+    main()
diff --git a/sqlite_helper.py b/sqlite_helper.py
new file mode 100644
index 0000000..331a4f7
--- /dev/null
+++ b/sqlite_helper.py
@@ -0,0 +1,166 @@
+import sqlite3
+import os
+
+class SQLiteDatabase:
+    """
+    SQLite database helper for FFmpeg Patchwork CI
+    Provides simplified interface for database operations
+    """
+
+    def __init__(self, db_path):
+        """
+        Initialize the SQLite database
+        
+        Args:
+            db_path: Path to the SQLite database file
+        """
+        self.db_path = db_path
+        self.connection = None
+        self._connect()
+
+    def _connect(self):
+        """Create a new database connection"""
+        # Ensure directory exists
+        os.makedirs(os.path.dirname(os.path.abspath(self.db_path)), exist_ok=True)
+        self.connection = sqlite3.connect(self.db_path)
+        self.connection.row_factory = sqlite3.Row
+
+    def get_cursor(self):
+        """Get a database cursor, reconnecting if necessary"""
+        try:
+            # Test connection
+            self.connection.execute("SELECT 1")
+        except (sqlite3.Error, AttributeError):
+            # Reconnect if connection is lost or was never established
+            self._connect()
+        
+        return self.connection.cursor()
+
+    def create_missing_table(self, name, columns):
+        """
+        Create a table if it doesn't exist
+        
+        Args:
+            name: Table name
+            columns: SQL column definitions as a string
+        """
+        cursor = self.get_cursor()
+        cursor.execute(f"SELECT name FROM sqlite_master WHERE type='table' AND name=?", (name,))
+        if cursor.fetchone() is not None:
+            # print(f"Table {name} already exists")
+            return
+
+        query = f"CREATE TABLE {name} {columns}"
+        # print(query)
+        cursor.execute(query)
+        self.connection.commit()
+        return
+
+    def query(self, table_name, keys, filter_command=""):
+        """
+        Execute a SELECT query and return the first matching row
+        
+        Args:
+            table_name: Table to query
+            keys: List of column names to select
+            filter_command: WHERE clause and other SQL filters
+            
+        Returns:
+            Dict containing the query results, or empty dict if no match
+        """
+        cursor = self.get_cursor()
+
+        str_cols = ", ".join(keys)
+        sql_query = f"SELECT {str_cols} FROM {table_name} {filter_command}"
+        # print(sql_query)
+        cursor.execute(sql_query)
+        db_out = cursor.fetchone()
+        out = {}
+        if not db_out:
+            return out
+        
+        for k in keys:
+            out[k] = db_out[k]
+        return out
+
+    def query_all(self, table_name, keys, filter_command=""):
+        """
+        Execute a SELECT query and return all matching rows
+        
+        Args:
+            table_name: Table to query
+            keys: List of column names to select
+            filter_command: WHERE clause and other SQL filters
+            
+        Returns:
+            List of dicts containing the query results
+        """
+        cursor = self.get_cursor()
+
+        str_cols = ", ".join(keys)
+        sql_query = f"SELECT {str_cols} FROM {table_name} {filter_command}"
+        # print(sql_query)
+        cursor.execute(sql_query)
+        db_out = cursor.fetchall()
+        
+        results = []
+        for row in db_out:
+            out = {}
+            for k in keys:
+                out[k] = row[k]
+            results.append(out)
+        return results
+
+    def insert(self, table, key_value_dict):
+        """
+        Insert a new row into a table
+        
+        Args:
+            table: Table name
+            key_value_dict: Dict mapping column names to values
+        """
+        cursor = self.get_cursor()
+
+        keys = list(key_value_dict.keys())
+        values = list(key_value_dict.values())
+        
+        placeholders = ", ".join(["?" for _ in keys])
+        keys_str = ", ".join(keys)
+
+        sql_request = f'INSERT INTO {table} ({keys_str}) VALUES ({placeholders})'
+        # print(f"{sql_request} with values {values}")
+        cursor.execute(sql_request, values)
+        self.connection.commit()
+        return cursor.lastrowid
+
+    def update(self, table, ref_key, ref_value, keys, values):
+        """
+        Update existing rows in a table
+        
+        Args:
+            table: Table name
+            ref_key: List of column names to use in WHERE clause
+            ref_value: List of values corresponding to ref_key
+            keys: List of column names to update
+            values: List of new values corresponding to keys
+        """
+        cursor = self.get_cursor()
+
+        set_clauses = [f"{k} = ?" for k in keys]
+        where_clauses = [f"{k} = ?" for k in ref_key]
+        
+        str_set = ", ".join(set_clauses)
+        str_where = " AND ".join(where_clauses)
+
+        sql_request = f'UPDATE {table} SET {str_set} WHERE {str_where}'
+        # print(f"{sql_request} with values {values + ref_value}")
+        cursor.execute(sql_request, values + ref_value)
+        self.connection.commit()
+
+    def close(self):
+        """Close the database connection"""
+        if self.connection:
+            self.connection.close()
+            self.connection = None
diff --git a/test_patchwork_permissions.py b/test_patchwork_permissions.py
new file mode 100644
index 0000000..bb219af
--- /dev/null
+++ b/test_patchwork_permissions.py
@@ -0,0 +1,129 @@
+#!/usr/bin/env python3
+"""
+Test script for verifying Patchwork API permissions.
+This script helps diagnose permission issues with the Patchwork API token.
+"""
+
+import requests
+import os
+import sys
+import argparse
+import json
+
+def test_patchwork_token(host, token, patch_id=None):
+    """
+    Test Patchwork API token permissions
+    
+    Args:
+        host: Patchwork host (e.g., patchwork.ffmpeg.org)
+        token: Patchwork API token
+        patch_id: Optional patch ID to test check creation
+        
+    Returns:
+        None, prints results to stdout
+    """
+    base_url = f"https://{host}"
+    headers = {"Authorization": f"Token {token}"}
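+    # Patchwork's REST API uses token authentication, passed as an
+    # "Authorization: Token <token>" header as constructed above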
+    
+    print("Testing Patchwork API token permissions...")
+    print(f"Host: {host}")
+    print(f"Token: {token[:5]}...{token[-5:]}")
+    
+    # Test 1: Get API root - should always work if token is valid
+    print("\n1. Testing API root access...")
+    url = f"{base_url}/api/"
+    
+    try:
+        response = requests.get(url, headers=headers)
+        print(f"Status code: {response.status_code}")
+        
+        if 200 <= response.status_code < 300:
+            print("✓ API root access: SUCCESS")
+            try:
+                print(f"Available endpoints: {', '.join(response.json().keys())}")
+            except (ValueError, AttributeError):
+                print("Could not parse endpoints")
+        else:
+            print("✗ API root access: FAILED")
+            print(f"Response: {response.text}")
+            print("Your token may be invalid or expired.")
+            return
+    except Exception as e:
+        print(f"✗ API root access: FAILED - Error: {str(e)}")
+        return
+    
+    # Test 2: List projects - read access check
+    print("\n2. Testing project list access (read permissions)...")
+    url = f"{base_url}/api/projects/"
+    
+    try:
+        response = requests.get(url, headers=headers)
+        print(f"Status code: {response.status_code}")
+        
+        if 200 <= response.status_code < 300:
+            print("✓ Project list access: SUCCESS")
+            projects = response.json()
+            if projects:
+                print(f"Found {len(projects)} projects, first project: {projects[0]['name']}")
+        else:
+            print("✗ Project list access: FAILED")
+            print(f"Response: {response.text}")
+    except Exception as e:
+        print(f"✗ Project list access: FAILED - Error: {str(e)}")
+    
+    # Test 3: Post a check (if patch_id provided)
+    if patch_id:
+        print(f"\n3. Testing check creation for patch ID {patch_id} (write permissions)...")
+        url = f"{base_url}/api/patches/{patch_id}/checks/"
+        
+        payload = {
+            "state": "success",
+            "context": "make_x86",
+            "description": "Testing API permissions",
+            "description_long": "This is a test post from the permission testing script."
+        }
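+        # Patchwork check states are "pending", "success", "warning" and
+        # "fail"; this test posts a harmless "success" entry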
+        
+        try:
+            response = requests.post(url, headers=headers, data=payload)
+            print(f"Status code: {response.status_code}")
+            
+            if 200 <= response.status_code < 300:
+                print("✓ Check creation: SUCCESS")
+                print("Your token has write permissions for posting check results!")
+            else:
+                print("✗ Check creation: FAILED")
+                print(f"Response: {response.text}")
+                
+                if response.status_code == 403:
+                    print("\nPERMISSION ERROR: Your token does not have write permissions.")
+                    print("You need to generate a new token with the appropriate scope.")
+                    print("Visit the Patchwork web interface, go to your profile settings,")
+                    print("and create a new API token with the 'write' permission.")
+        except Exception as e:
+            print(f"✗ Check creation: FAILED - Error: {str(e)}")
+    else:
+        print("\n3. Skipping check creation test (no patch ID provided)")
+        print("To test check creation, run this script with the --patch-id parameter.")
+    
+    # Summary
+    print("\nSUMMARY:")
+    if not patch_id:
+        print("Basic token validation complete. For complete testing, run with --patch-id.")
+    print("If you're seeing 403 errors in the main application, you likely need a token with write permissions.")
+    print("Make sure your token has the 'write' scope in the Patchwork web interface.")
+
+if __name__ == "__main__":
+    parser = argparse.ArgumentParser(description="Test Patchwork API token permissions")
+    parser.add_argument("--host", default=os.environ.get("PATCHWORK_HOST", "patchwork.ffmpeg.org"),
+                        help="Patchwork host (default: from PATCHWORK_HOST env var or 'patchwork.ffmpeg.org')")
+    parser.add_argument("--token", default=os.environ.get("PATCHWORK_TOKEN", ""),
+                        help="Patchwork API token (default: from PATCHWORK_TOKEN env var)")
+    parser.add_argument("--patch-id", type=int, help="Optional patch ID to test check creation")
+    
+    args = parser.parse_args()
+    
+    if not args.token:
+        print("ERROR: No token provided. Use --token or set PATCHWORK_TOKEN environment variable.")
+        sys.exit(1)
+    
+    test_patchwork_token(args.host, args.token, args.patch_id)

commit 1b89ec19cf9c04461d03d02267103ae5419f0bd4
Author:     softworkz <softworkz at hotmail.com>
AuthorDate: Wed May 28 19:31:27 2025 +0200
Commit:     softworkz <softworkz at hotmail.com>
CommitDate: Wed May 28 19:31:27 2025 +0200

    initial commit

-----------------------------------------------------------------------


hooks/post-receive
-- 
UNNAMED PROJECT


More information about the ffmpeg-cvslog mailing list