When you’re deep into Python development, knowing exactly which version you’re working with can save you from a lot of headaches. You might think you know, but trust me, the details matter. A simple command can reveal the version that’s actively in use. You can do this right from your terminal or command prompt.
```bash
# To check the Python version
python --version
```
This command will typically output something like `Python 3.8.5`. If you’re on a system with multiple versions of Python, you may need to specify `python3` instead.
```bash
# Checking Python 3 version specifically
python3 --version
```
This distinction is crucial, especially when libraries and dependencies vary across Python versions. Python 2.x and 3.x are not compatible, and many libraries have dropped support for Python 2 altogether.
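If your code simply must not run under Python 2, it is worth failing fast with a clear message instead of letting a cryptic error surface later. Here is a minimal, generic guard sketch (not tied to any particular project):

```python
import sys

# Fail fast with a clear message if the interpreter is too old.
if sys.version_info < (3, 0):
    raise RuntimeError("This script requires Python 3")

# sys.version_info is a named tuple: (major, minor, micro, ...)
version_string = "%d.%d.%d" % sys.version_info[:3]
print("Running under Python", version_string)
```

Because `sys.version_info` is a tuple, you can compare it directly against `(3, 8)` or any other minimum you need.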
If you want more than just the version number, you can get detailed information about your Python environment using the `sys` module. Here’s how you can do that:
```python
import sys

print("Python version")
print(sys.version)
print("Version info.")
print(sys.version_info)
```
This will give you a full breakdown, including major, minor, and micro versions. You can also check if you’re in a virtual environment:
```python
import sys

if hasattr(sys, 'base_prefix') and sys.base_prefix != sys.prefix:
    print("You are in a virtual environment.")
else:
    print("You are not in a virtual environment.")
```
Understanding your Python environment can also help you troubleshoot issues when dependencies conflict. If you’re seeing unexpected behavior, the version you’re running could be the culprit.
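When you are hunting down one of these conflicts, it helps to dump everything relevant in one pass. Here is a small diagnostic sketch using only the standard library:

```python
import sys
import platform

# Collect the facts that matter for "works on my machine" debugging.
env_report = {
    "interpreter": sys.executable,
    "version": platform.python_version(),
    "platform": platform.platform(),
    "prefix": sys.prefix,
}

for key, value in env_report.items():
    print(f"{key:12}: {value}")
```

Pasting this output into a bug report is usually far more useful than just saying "Python 3".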
Another common pitfall is the presence of multiple Python installations on your computer. On Windows, for example, you might find both Python 3.x and Python 2.x installed. If you’re using a package manager like `pip`, be sure you’re calling the correct version:
```bash
# To install a package for Python 3
pip3 install <package-name>
```
Without specifying `pip3`, you might inadvertently install packages for Python 2, leading to runtime errors that can take hours to debug. If you’re unsure which `pip` you’re using, you can verify it this way:
```python
import pip

print("Pip version:", pip.__version__)
```
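An even more direct way to remove the ambiguity is to invoke pip as a module of a specific interpreter. This sketch assumes pip is installed for the interpreter running the script:

```python
import subprocess
import sys

# "python -m pip" always runs the pip that belongs to that exact interpreter,
# so there is no doubt about which installation you are inspecting.
result = subprocess.run(
    [sys.executable, "-m", "pip", "--version"],
    capture_output=True,
    text=True,
)
print(result.stdout.strip())
```

The output names both the pip version and the interpreter it is bound to, which settles the "which pip?" question definitively.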
For more advanced users, you might delve into virtual environments using `venv` or `virtualenv`. Here’s how you can create a virtual environment and activate it:
```bash
# Create a new virtual environment
python -m venv myenv

# On Windows, activate it
myenv\Scripts\activate

# On macOS/Linux, activate it
source myenv/bin/activate
```
Activating a virtual environment ensures that all packages installed while you’re working in that context don’t interfere with your global Python environment. It creates an isolated space tailored to your specific project needs.
This practice becomes even more important when running scripts that have distinct dependencies. You don’t want to create a five-alarm fire with conflicting requirements. So, here’s a clean way to run a script from your virtual environment:
```bash
# Running a script within the environment
python my_script.py
```
With this approach, you can maintain organization and avoid version conflicts, ensuring your code remains clean and functional. It’s amazing how a little organization can prevent chaos…
Running other scripts without creating a five-alarm fire
But the real fun begins when your script needs to run another script. This is a common requirement in build systems, data pipelines, or any complex application where you want to delegate tasks to specialized tools. The most obvious way to do this, and I see this all the time in code that gives me hives, is to use `os.system`.
```python
import os

# The quick, dirty, and catastrophically bad way.
user_provided_filename = "my_report; rm -rf /"
os.system(f"python generate_report.py --filename {user_provided_filename}")
```
Do you see the problem? `os.system` just hands the command string over to the system’s shell. If any part of that string comes from a user, or a file name, or anything you don’t have absolute 100% control over, you have just created a massive security hole called a shell injection vulnerability. In the example above, you didn’t just generate a report, you started deleting your entire hard drive. This is not a “five-alarm fire”; this is an extinction-level event for your server.
The correct, modern, and non-insane way to do this is with the `subprocess` module. It was designed specifically to replace the jungle of insecure and platform-dependent functions like `os.system` and `os.popen`. The key difference is that you pass the command and its arguments as a list of strings. This completely bypasses the shell.
```python
import subprocess

# The safe and correct way.
user_provided_filename = "my_report; rm -rf /"
command = [
    "python",
    "generate_report.py",
    "--filename",
    user_provided_filename,
]
subprocess.run(command)
```
In this version, the malicious string `"my_report; rm -rf /"` is passed as a single, harmless argument to `generate_report.py`. The script might fail because it can’t find a file with that ridiculous name, but your server will still be there tomorrow. The shell never gets a chance to interpret the semicolon or the `rm` command.
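You can verify this behavior yourself by having the child process print what it actually received. The inline `-c` child here is just a stand-in for a real script:

```python
import subprocess
import sys

# The child prints its first command-line argument. The dangerous string
# arrives as exactly one argv entry; no shell ever parses the semicolon.
evil = "my_report; rm -rf /"
result = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.argv[1])", evil],
    capture_output=True,
    text=True,
)
print(result.stdout.strip())  # → my_report; rm -rf /
```

The entire string, semicolon and all, lands in `sys.argv[1]` of the child as one inert argument.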
But we can do better. Which `python` did we just run? Is it the one from our virtual environment? Maybe. It depends on the system’s `PATH`. You’re leaving it to chance. To guarantee you’re using the exact same Python interpreter that is running your main script, you should use `sys.executable`.
```python
import subprocess
import sys

# The robust, safe, and professional way.
command = [
    sys.executable,  # Explicitly use the current Python interpreter
    "data_processor.py",
    "--mode", "fast",
]

# Capture output and check for errors
try:
    result = subprocess.run(
        command,
        capture_output=True,
        text=True,   # Decodes stdout/stderr as text
        check=True,  # Raises CalledProcessError if return code is non-zero
    )
    print("Script succeeded! Output:")
    print(result.stdout)
except subprocess.CalledProcessError as e:
    print(f"Script failed with return code {e.returncode}")
    print("STDERR:")
    print(e.stderr)
```
Now we’re talking. We’re using `sys.executable` to ensure we’re using the Python from our virtual environment, not some random system Python. We’re using `capture_output=True` and `text=True` to get the standard output and standard error streams back as clean strings. And best of all, we’re using `check=True`, which acts as a safety net. If the script we call returns an error code, `subprocess.run` will raise a `CalledProcessError` automatically, preventing your main script from blindly continuing on as if everything worked. This gives you control, security, and predictability.
Source: https://www.pythonlore.com/exploring-sys-executable-for-interpreter-path/