Returns the absolute path obtained by resolving the given path against the master path.
When a list is given, returns a list with all its members absolutized from the master path.
Example:
>>> masterpath ()
'/abs/path/to/project/'
>>> masterpath ('path/')
'/abs/path/to/project/path/'
>>> masterpath (['whatever/inc', 'src'])
['/abs/path/to/project/whatever/inc', '/abs/path/to/project/src']
Returns the absolute path of the build file being executed or, when a parameter is given, the absolute path after joining it to the current path.
If the given parameter is a list, it will return a list with all the internal paths made absolute relative to the current workpath.
Please note that absolute paths will remain unchanged.
Example:
>>> workpath ()
'/abs/path/to/project/current/dir/'
>>> workpath ('path')
'/abs/path/to/project/current/dir/path'
>>> workpath (['inc', 'src'])
['/abs/path/to/project/current/dir/inc', '/abs/path/to/project/current/dir/src']
>>> workpath ('/abs/path')
'/abs/path'
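The joining rules above can be sketched with a small helper (resolve_paths is a hypothetical name, not part of pyke; the real implementation may differ):

```python
import os

def resolve_paths(base, arg=None):
    # Sketch of workpath-style resolution: relative paths are joined onto
    # the base directory, absolute paths pass through unchanged, and lists
    # are mapped element by element.
    if arg is None:
        return base
    if isinstance(arg, (list, tuple)):
        return [resolve_paths(base, item) for item in arg]
    if os.path.isabs(arg):
        return arg  # absolute paths remain unchanged
    return os.path.join(base, arg)
```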
Python decorator to easily define rules when declaring python functions.
This decorator can be used with the following arguments:
_creates: list of argument names of the function we are decorating that are going to be updated or created. It can also be specified as a comma-separated string of argument names.
_deletes: list of argument names of the function we are decorating that are going to be deleted
_from: list of argument names of the function we are decorating that are going to be used as input
_if: this is the build policy that you want to use. Build policy defaults to ‘updated’ when the _from argument is specified, and defaults to True when it is not. You can specify a valid string or a function call. Read more about build policies for details.
_find_folder_deps: if True it will scan through all input files in order to see which ones reference a folder, and in that case, it will wait until all previous operations on that folder are finished before running this decorated method.
Here a simple example so you get the idea about how to use this decorator:
@pyke.ruledef (_deletes = 'file')
def rmfile (file):
... rmfile body ...
@pyke.ruledef (_creates = 'target', _from = 'sources', _if = 'missing')
def cptree (sources, target, ignore_errors = False):
... cptree body ...
@pyke.ruledef (_creates = 'target', _from = 'src1, src2')
def joinfiles (target, src1, src2, ignore_errors = False):
... joinfiles body ...
# overriding default policy
cptree (srcdir, targetdir, _if = 'updated')
NOTE: the function arguments that _creates and _deletes reference can only be strings or simple lists of strings; nested lists will be flattened into simple lists. All strings will be absolutized from the current working dir.
Example:
>>> @pyke.ruledef (_creates = 'arg_name')
... def function (arg_name):
...     ...
>>> function ('asdf')
will internally call the original function as
function ('/path/to/file/asdf')
>>> function (['asdf'])
will internally call the original function as
function (['/path/to/file/asdf'])
>>> function (['a', ['b'], ['c']])
will internally call the original function as
function (['/path/to/file/a', '/path/to/file/b', '/path/to/file/c'])
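The flattening and absolutizing described in the note can be sketched as follows (normalize_file_args is a hypothetical helper shown only to illustrate the behaviour; pyke's internals may differ):

```python
import os

def normalize_file_args(value, cwd):
    # Strings are absolutized from the current working dir; nested lists
    # are flattened into a simple list of absolutized strings.
    if isinstance(value, str):
        return value if os.path.isabs(value) else os.path.join(cwd, value)
    flat = []
    for item in value:
        result = normalize_file_args(item, cwd)
        flat.extend(result if isinstance(result, list) else [result])
    return flat
```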
Returns the associated value to an option passed to pyke in the commandline interface by using the “-o” argument.
Example:
>>> # ./pyke -o mykey
>>> pyke.option ('mykey')
True
>>> # ./pyke -o mykey=myvalue
>>> pyke.option ('mykey')
'myvalue'
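A minimal sketch of how such “-o” options could be collected from a command line (parse_options is hypothetical; it only illustrates the key / key=value convention):

```python
def parse_options(argv):
    # Bare keys map to True; 'key=value' pairs map to their value.
    options = {}
    args = iter(argv)
    for arg in args:
        if arg == '-o':
            key, sep, value = next(args).partition('=')
            options[key] = value if sep else True
    return options
```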
Prints a string using a global mutex.
Use this function to import rules defined in other pyke or python file.
You can only import one rule file at a time.
For example:
importRules ('myrules.py')
importRules ('rules/arch-%s.py' % g.arch.name)
In fact, the following two lines are fairly similar in behaviour:
from myrules import *
importRules ('myrules.py')
The main difference is that when importing rules with this function, the global ‘g’ variable is available inside myrules.py, as well as everything (functions, classes, ...) that have been defined in the pyke file before that file has been imported.
All variables on that file will be added to the global variables and will be available everywhere (existing variables will be replaced).
Loads a local or remote package module and returns the loaded module object. An exception will be raised if the package cannot be found.
Note that if the package is not in the system, it will try to download from the pykebuildtool repository. Also note that http and https protocols are supported, which means you can reference a package on the internet or your local intranet.
All remote packages will be downloaded and cached in disk.
When only the package name is given, it will proceed to search for that package in the following locations and order:
- first, try to load the package from local directory in the build dir
- then, try to load the package from a zip file in the build dir
- then, try to load the package from a directory in the master dir
- then, try to load the package from a zip file in the master dir
- finally, check if it is a zip file in the pyke local repository
When the package cannot be found anywhere it will try to download the package from the pyke public repository (http://packages.pykebuildtool.com).
When a URL is given, it will check first if already downloaded and if not it will try to download it.
When everything fails, it will raise an error.
Example:
>>> localpkg = pyke.require ('local-package')
>>> localpkg.rules.dummyrule ()
>>> dummy = pyke.require ('dummy-package-1.2.7')
>>> dummy.rules.dummyrule ()
>>> dummy = pyke.require ('http://any.server.com/path/to/dummy-package-1.2.7.zip')
>>> dummy.rules.dummyrule ()
Regarding package versioning, versions should be appended to the name using only numbers and dots, and separated from the main package name by a dash (e.g: package-1.0.3). The directory inside the zip should not contain any version in its name.
For example, if your package is called ‘my-dummy-package’, and you want to create the version 1.2.3 then the package deliverable should be ‘my-dummy-package-1.2.3.zip’, the zip file should contain a directory named ‘my-dummy-package’ whose name, as you can see, matches the name of the zip file, and that directory must contain at least a file named ‘__init__.py’ or ‘__init__.pyc’, as any python module would.
Just to make it clear, if you uncompress a package, this is what you get:
$ unzip -t 'my-dummy-package-1.2.3.zip'
my-dummy-package/
my-dummy-package/__init__.py
my-dummy-package/otherfile.py
Note
We encourage you to include concrete version numbers as part of the package name, otherwise the latest package might be downloaded and cached, which might then lead to different build environments in different machines. This would be far from having deterministic builds, which is what you should aim for.
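Assuming the layout described above, a package deliverable can be produced with a few lines of stock zipfile code (make_package_zip is a hypothetical helper, shown only to illustrate the expected structure):

```python
import io
import zipfile

def make_package_zip(name, version, files):
    # Build an in-memory zip whose single top-level directory is the
    # unversioned package name, while the deliverable name carries the
    # version, as the layout above requires.
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, 'w') as zf:
        for filename, contents in files.items():
            zf.writestr('%s/%s' % (name, filename), contents)
    buf.seek(0)
    return '%s-%s.zip' % (name, version), buf

deliverable, archive = make_package_zip(
    'my-dummy-package', '1.2.3',
    {'__init__.py': '', 'otherfile.py': ''})
```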
Includes one or more pyke build files.
You can specify one or more build files or folders, so they will be included and processed after current file.
You can pass pyke build files:
include ('core/build.pyke')
include ('core/construct-the-core.pyke')
As well as folders (it will look for “build.pyke” inside), e.g:
include ('core/')
And you can also pass several folders/files at once, e.g:
include ('core', 'model', 'app', 'view/build-the-view.pyke')
include (['core', 'model', 'app'])
Please note that the build files will be processed in the same order as included, so in last example, from ‘model’ build scripts you can use anything built on ‘core’ module, but not the other way around.
You should also note that each included file will have no global variables, thus, avoiding conflicts on big projects. There are several ways to share variables between build files, but you should probably want to use the global ‘g’ variable.
Another way to import variables from an included file is to pass this function either the “import_all” set to True (so everything will be imported), or “vars_to_import” argument with a regular expression or a list of regular expressions of the variables to import.
Please note that ALL variables defined in the master.pyke are global by design.
The idea of this method is to simplify the use of deferred resources that are only needed within the context of one or more jobs
>>> class RandomGenerator:
...     def __init__ (self):
...         self.randnum = None
...     def run (self):
...         self.randnum = random.randint(0, 100)
>>> deferred_param = pyke.deferred (RandomGenerator(), lambda obj: obj.randnum)
>>> pyke.rules.echo (deferred_param)
This method also supports getting a lambda/function as a parameter, in which case it will be executed before the main job is executed.
>>> # wait for 3 seconds to run this rule
>>> pyke.rules.putfile ('myfile', 'file contents', _needs = pyke.deferred (lambda: time.sleep(3)))
Note
The object returned by deferred function will, at the right time, be invoked this way:
>>> myobject.run()
>>> formatter (myobject) # produces the actual output!
>>> myobject.cleanup()
but only right before the rule is executed; if the rule is not going to be executed, the ‘run’ method and the formatter will never be called.
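The run/formatter/cleanup protocol can be sketched like this (the Deferred class below is an illustrative approximation, not pyke's actual implementation):

```python
import random

class Deferred:
    # 'run' and the formatter fire only when the value is actually
    # needed; an optional 'cleanup' is invoked afterwards.
    def __init__(self, obj, formatter=lambda obj: obj):
        self.obj = obj
        self.formatter = formatter

    def resolve(self):
        self.obj.run()
        try:
            return self.formatter(self.obj)  # produces the actual output
        finally:
            if hasattr(self.obj, 'cleanup'):
                self.obj.cleanup()

class RandomGenerator:
    def __init__(self):
        self.randnum = None
    def run(self):
        self.randnum = random.randint(0, 100)

value = Deferred(RandomGenerator(), lambda obj: obj.randnum).resolve()
```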
Helper class to access attributes of objects instantiating this class as if they were a dictionary.
Example:
>>> obj = AttrDict()
>>> obj.name = "whatever"
>>> obj.name
"whatever"
>>> obj.has ("name")
True
>>> obj.set ("var_name", "value")
>>> obj.var_name
"value"
>>> obj.get ("unknown", "default")
"default"
Returns the value associated to the given name, or the default value if no variable with that name exists.
>>> obj.name = "value"
>>> obj.get ("name")
"value"
>>> obj.get ("name", default = "asdf")
"value"
>>> obj.get ("unknown", default = "asdf")
"asdf"
Sets or overwrites the value of the internal variable named “name” with the given value.
>>> obj.set ("whatever", "first")
>>> obj.set ("whatever", "second")
"first"
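A minimal sketch of a class with this behaviour (an approximation inferred from the examples above, including set returning the previous value; the real AttrDict may differ):

```python
class AttrDict(dict):
    # Attributes and dictionary keys are the same storage.
    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        self[name] = value

    def has(self, name):
        return name in self

    def set(self, name, value):
        previous = dict.get(self, name)  # the examples suggest set returns the prior value
        self[name] = value
        return previous

    def get(self, name, default=None):
        return dict.get(self, name, default)
```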
This class is intended to handle the current build platform information.
Returns the full representation of the arch string.
Returns the Linux flavour if any Linux is detected; None is returned otherwise.
Returns a normalized native machine string.
Returns the brief representation of the arch string, which does not include the build mode but includes the OS, the MACHINE and the TOOLCHAIN.
Returns a list containing the build components of an arch string of the form “<CPU>-<OS>-<TOOLCHAIN>-<MODE>”; the list contains CPU, OS, TOOLCHAIN and MODE.
>>> BuildPlatform.split("x86-ubuntu-gcc-debug")
['x86', 'ubuntu', 'gcc', 'debug']
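The split logic amounts to breaking the arch string on dashes (a sketch, assuming none of the four components itself contains a dash):

```python
def split_arch(arch):
    # Break '<CPU>-<OS>-<TOOLCHAIN>-<MODE>' into its four components.
    cpu, os_name, toolchain, mode = arch.split('-')
    return [cpu, os_name, toolchain, mode]
```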
Helper class used to trick the interpreter and stop processing the current pyke build file being interpreted.
See ‘pyke.build.exit’ for details.
Returns the absolute path string to the file or set of files, using the given absolute path or resolving a relative path from the current working directory.
Please note that virtual files are already absolutized.
When a single file is passed as a string, a string is returned. When a list of files is passed, a list of files is returned. When normalize_dirs is True, it will check if the resulting file is a directory in the current filesystem; this is usually useful when normalizing commands that might accept dirs as input.
Get the information associated to given file using given key or return the default value if no information has been stored to that path
Keys starting with ‘_’ will not be persisted, meaning the data will be available for one execution only.
Associate given (key,value) pair to given path file in the current graph
When a file changes, all its cached key, value pairs are not readable anymore.
Also, note that keys starting with ‘_’ will not be persisted, meaning the data will be available for one execution only.
Specify the storage file for the cache, so that, if exists, it will be loaded, and at the end of the build it will be saved.
Returns the build duration in seconds
Executes a job on given node and returns a tuple containing (exitcode, stdout, stderr).
If simulation is enabled, it does nothing.
Execute a build file in the current context
This function causes the executor to immediately stop processing the current build file without disrupting the normal processing of the parent script (if any).
Finds all build files inside given dir
Finds the master file inside given path. If it cannot find the master inside given path, it will recursively try to find the file in the parent folders.
Default method for displaying job commands.
This method returns a unicode string that represents given job, taking into account the current configuration parameters (for example, the display mode, ...)
Expands given list of patterns taking into account the newly created files.
An absolutized list of files will be returned for those files that match given pattern.
If relative paths are provided, and no absolute path is provided, it will use the current workpath.
When patterns are provided, the newly created list of files is examined, whereas if a list of files is passed as first argument, those files will be absolutized and returned, because it is assumed that if they don’t exist yet, they will exist.
When recursive_dirs is True and some of the patterns are a directory, all internal contents will be recursively expanded, so that expanding (‘dir/’) and expanding (‘dir/**‘) will be equivalent
Example:
>>> glob ('src/*.cc')
['/abs/path/to/file1.cc', ...]
Returns True if given file has been scheduled to be built at some point in the graph.
Returns True when given path has an associated value for given key, and False otherwise.
Returns True if the file calling this function is the pyke masterfile. This can be a useful function when combining multiple independent projects together, so the pyke-rules are loaded only when the subproject is run independently.
>>> if isMasterFile():
...     importRules ('myrules.py')
...     include (['required-component'])
Returns the list of all the files that have been created by the last JobNode added to the Job Graph. Basically if you called a rule, it will return the same files created by that rule.
Returns a list of all normalized targets in the current graph; this way we can easily find dependencies when looking for files that have not yet been generated (because the graph has not been executed).
The list of files also includes the list of directories to output files.
When no parameter is passed, it returns the absolute path where pyke is installed. If a file is passed, it joins given path to pyke path.
Returns the relative route from master path to given path
This function is usually called when generating brief outputs on pyke, to avoid showing long paths.
The main aim of this function is to execute a build script in order to add more nodes to the graph. By default, all variables defined on each script are private to that script, and all classes and functions are public to all files.
The use of vars_to_import allows sharing private script variables to the global namespace, thus allowing other scripts to use them.
Please be aware that build scripts are not intended to run any command at all, just to prepare the system to do so.
It is important to note that although private variables are not exported to the public namespace, variables sharing the same name as a variable in the public namespace will replace those variables or functions.
Load and run the master build script.
Specifies a new target section that will be executed only when running pyke with ‘-t’ parameter.
By default all jobs are created in the section ‘all’. Jobs under a new section are executed only when you run pyke with ‘-t’ parameter.
For example, if you add rules after section (‘test’), then, the only way of running those rules would be running:
$ pyke -t test
There are two special sections: ‘all’, which is the default for all pyke files that are processed, and ‘clean’, which is for cleaning up a build; both are automatically created by all the builtin rules.
Example of usage:
# by default all scripts start as if you run next line (but you don't need to)
section ('all')
# run some rules here...
section ('test')
# run some other rules here...
Calling this function with no parameters or with True is equivalent to calling serialStart(), whereas calling it with serial_mode = False is equivalent to calling serialEnd().
Example:
>>> pyke.build.serial()
...
>>> pyke.build.serial(False)
All jobs defined after this point will run in parallel again after calling this function.
Please note that all pyke scripts start running in parallel by default.
Along with serialEnd() defines a block where jobs will be executed serially, no matter how many threads are being used to run things in parallel. All jobs created within these two functions will run one after another.
Take the following example:
>>> shell ('echo -n 1')
>>> shell ('echo -n 2')
>>> shell ('echo -n 3')
Since the commands have no dependencies, it is not guaranteed that ‘123’ is written. Depending on the number of threads it might end up writing ‘132’ or ‘231’ instead.
The only way to guarantee that the commands are executed one after another is either running the script using only 1 thread, or placing the commands inside a serialStart() and serialEnd() block:
>>> serialStart ()
>>>
>>> shell ('echo -n 1')
>>> shell ('echo -n 2')
>>> shell ('echo -n 3')
>>>
>>> serialEnd ()
This construct would always produce ‘123’
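The guarantee of a serialStart()/serialEnd() block can be pictured as funnelling the enclosed jobs through a single worker (an analogy for illustration, not pyke's actual implementation):

```python
from concurrent.futures import ThreadPoolExecutor

# With a single worker, jobs always run in submission order, no matter
# how many threads the rest of the build uses.
output = []
with ThreadPoolExecutor(max_workers=1) as serial_block:
    for digit in ('1', '2', '3'):
        serial_block.submit(output.append, digit)
result = ''.join(output)
```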
Set the current master file & master path
Set the current master path folder
Changes the current workpath
Returns True or False depending on whether the given node should be built.
is_build_required computes the expected behaviour, and the returned value can enforce a different behaviour.
Returns the timestamp in seconds at which the build started
This function creates a CliTool from given argument specification, given set of default values, and a class inheriting from CliTool (in order to override some functions or behaviours).
argspec is the templatized command line string. argspec can be either a string or a list of arguments.
In case a string is provided, it will treat spaces, tabs and newlines as separators between arguments. Internally it will be parsed, split and converted to a CliTool object.
The first argument is always the command we want to run. If the full path is not specified, it will be guessed using system path.
Templatized arguments are enclosed in braces and have a special format in order to assign an alias, initialize the default value and specify if it is an input or an output file.
Please note that you can customize the option opening string and the closing string, to avoid issues when defining rules.
NOTE: ALL TEMPLATIZED ARGUMENTS SHOULD END IN EITHER ‘>]’ OR ‘)]’. If you need to include that pattern inside the default value of an argument, please feel free to pass a list inside the argspec instead of a string.
Specifying input and output files is ESSENTIAL for pyke to work.
This function will return a CliTool object.
Example using string as a template:
cc = easycli(
'''
g++ -g -Wall
[-c<compile_only=true>]
[-I<includes=[]>]
[-O<optimize={'no' : 0, 'fast**' : 1, 'faster' : 3}>]
[-D<defines=['DEBUG']>]
[-o<output=""> (OUT)]
[sources=[] (IN)]
''',
instances = 'tools.compiler.CCompiler'
)
Another example using a list and different delimiters:
python_touch = easycli ([
pyke.sysutil.findapp ('python'),
'-c',
"import sys; f = open(sys.argv[1], 'w+b'); f.close(); sys.exit(0);",
'{{<output> (OUT)}}'
],
delimiters = ('{{', '}}')
)
Please note that in the example above, the ‘[1]’ in the Python snippet would clash with the default ‘[’/‘]’ delimiters, so we change delimiters to avoid conflicts.
This class will handle a single command line parameter. A parameter is defined by the option (also called flag or switch) and the actual argument to that option.
This class provides a method to expand arguments with the options. This class is not intended to be used for parsing commandlines but for creating them along with CliTool.
arg_type should be specified to know if the argument is expected to represent a local file in the filesystem as an input (‘in’) or output (‘out’)
>>> # simple expansion
>>> p = CliParam ('-k')
>>> p.expand (True)
['-k']
>>> p.expand ('asdf')
['-kasdf']
>>> p.expand (['a', 'b', 'c'])
['-ka', '-kb', '-kc']
>>> # expansion of an option with space at the end
>>> p = CliParam ('-k ')
>>> p.expand (True)
['-k']
>>> p.expand ('asdf')
['-k', 'asdf']
>>> p.expand (['a', 'b', 'c'])
['-k', 'a', '-k', 'b', '-k', 'c']
>>> # another example
>>> p = CliParam ('--name=', 'asdf')
>>> p.expand ('asdf')
['--name=asdf']
>>> p.expand (['a', 'b', 'c'])
['--name=a', '--name=b', '--name=c']
Expands current parameter as a list of arguments to be passed to a command line
Example:
>>> # Does nothing (same as specifying False or empty string)
>>> p = CliParam ('/D')
>>> p.expand([])
[]
>>> # splits each option and each item separately because of the ending space
>>> p = CliParam ('/D ')
>>> p.expand(['a', 'b'])
['/D', 'a', '/D', 'b']
>>> # creates one item for each value
>>> p = CliParam ('/D=')
>>> p.expand(['a', 'b'])
['/D=a', '/D=b']
>>> # creates one item for each value
>>> p = CliParam ('/D')
>>> p.expand(['a', 'b'])
['/Da', '/Db']
>>> # with empty key it just expands the values
>>> p = CliParam ('')
>>> p.expand(['a', 'b'])
['a', 'b']
>>> # using predefined expansion arguments
>>> p = CliParam ('--arch=', enum = {'x86' : 'i686', 'x64' : 'amd64'})
>>> p.expand ('x86')
['--arch=i686']
>>> p.expand ('misc')
['--arch=misc']
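The expansion rules shown in the examples can be summarized in a short sketch (expand_param is a hypothetical stand-in for CliParam.expand; the real class supports more options):

```python
def expand_param(option, value, enum=None):
    # A trailing space in the option emits option and argument as
    # separate tokens; otherwise they are concatenated. An enum dict
    # maps known values before expansion.
    if value in (None, False, '', []):
        return []
    if value is True:
        return [option.strip()]
    values = value if isinstance(value, list) else [value]
    if enum:
        values = [enum.get(v, v) for v in values]
    out = []
    for v in values:
        if option.endswith(' '):
            out.extend([option.strip(), v])
        elif option:
            out.append(option + v)
        else:
            out.append(v)
    return out
```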
Returns TRUE if the argument is expected to be one or more input files
Returns TRUE if this argument is expected to be one or more output files
This class makes it easy to manipulate command-line interfaces: parameters can be added, removed, updated and cloned, not only based on a command-line string but also through meaningful names for each parameter, which can be easily edited and accessed while guaranteeing the order of the arguments with which the command-line tool will be called.
You might want to use this class by using the createCliTool method.
Example:
>>> cc = CliTool (
'g++',
'[compile_only] [out_file] [defines] [includes] [sources]',
{
'compile_only' : CliParam ('-c', True),
'out_file' : CliParam ('-o'),
'defines' : CliParam ('-D', []),
'includes' : CliParam ('-I', []),
'sources' : CliParam (default = [])
}
)
>>> cc.defines = ['DEBUG=1']
>>> cc.defines
['DEBUG=1']
>>> cc.out_file = 'app'
>>> cc.sources = 'main.cc'
>>> cc.getArguments()
['-c', '-o', 'app', '-DDEBUG=1', 'main.cc']
>>> cc.defines += ['ENCODING=UTF_8']
>>> cc.getArguments()
['-c', '-o', 'app', '-DDEBUG=1', '-DENCODING=UTF_8', 'main.cc']
This is the default method that runs the tool to build its outputs.
Clears all internal elements. Usually you would like to use reset instead.
Creates a copy of the current CliTool object by sharing the parameters but creating a different copy of the current data
This means that if any parameter or default value changes on either the existing object or the copied object, the other will change as well. This is by design.
You can use cli.deepcopy () to get a complete independent object.
Example:
>>> cli.name = 'cli'
>>> cli2 = cli.copy()
>>> cli2.getArguments () == cli.getArguments()
True
>>> cli2.name = 'cli2'
>>> cli2.getArguments () == cli.getArguments()
False
>>> cli.name, cli2.name
('cli', 'cli2')
As you might expect, this method returns a full copy of this object and the internal references. This operation is expensive, and most of the time you are likely to prefer calling the copy method instead.
Anyway, this function is here for your convenience.
Extends the values associated to given key by appending or prepending given string or list of strings.
If the current value is empty or not a list, it is converted to a list.
This function is used to update the output path according to some rules defined by derived classes. By default this method returns the file itself.
Example:
>>> # imagine we want to add the current platform string before the output file name
>>> gcc.fixOutputPath ('/path/to/out.o')
'/path/to/_x86-ubuntu12.10-gcc/out.o'
>>> # now imagine you want all libraries to be saved somewhere using a 'lib' prefix
>>> gcc.fixOutputPath ('/path/to/name.a')
'/libpath/libname.a'
Please note that both examples require reimplementing this function in derived classes.
Returns the current value associated to given parameter key or variable name.
The default value will be returned when the parameter key is invalid, when it does not have any associated value, or when the associated value is None.
Example:
>>> cli.set ('name', 'value')
>>> cli.get ('name', 'missing')
'value'
>>> cli.get ('name2', 'missing')
'missing'
Returns the commandline arguments not including the tool according to the current template and the current values for the templatized variables.
Example:
>>> cc = CliTool (
'g++',
'[compile_only] [out_file] [defines] [includes] [sources]',
{
'compile_only' : CliParam ('-c', True),
'out_file' : CliParam ('-o'),
'defines' : CliParam ('-D', []),
'includes' : CliParam ('-I', []),
'sources' : CliParam (default = [])
}
)
>>> cc.defines = ['DEBUG=1']
>>> cc.defines
['DEBUG=1']
>>> cc.out_file = 'app'
>>> cc.sources = 'main.cc'
>>> cc.getArguments()
['-c', '-o', 'app', '-DDEBUG=1', 'main.cc']
Return a brief string of the current commandline.
Extracts and normalizes commonly used build flags, removing them from given dict and returning a BuildFlags object.
Please note that the given dict object is modified: build flags are effectively removed from it after this function is called.
Returns a tuple containing a list of input files and a list of output files that can be deduced from the current commandline but are internal to the tool being used.
For example, imagine you are using a command that compiles a file and you have specified you want debug info. Imagine that tool, by default, generates the output file for the debug info, using the same name and path, but different extension, as the output of the file being generated.
What this means is that the internal builder will have no clue about this unless you explicitly add a variable handling this scenario... but doing so is sooooo tiring...
So by redefining this function you’ll be able to parse the current commandline and specify those files that you know are going to be used as additional inputs or additional outputs.
Example:
>>> cl.getImplicitFiles ()
([], ['/path/to/output/debug/file.pdb'])
>>> gcc.getImplicitFiles ()
([], [])
>>> # redefining this method
>>> def newGetImplicit (self):
...     return (['in'], ['out'])
>>> cl.getImplicitFiles = newGetImplicit
>>> cl.getImplicitFiles ()
(['in'], ['out'])
Returns all the values associated to input parameters
Example:
>>> tool = CliTool (
'tool',
'[flags] [target] [sources]',
{
'flags' : CliParam ('', 'cvf'),
'target' : CliParam ('-o', arg_type = 'out'),
'sources' : CliParam ('', [], arg_type = 'in')
}
)
>>> tool.target = '/path/to/output'
>>> tool.sources.append ('/path/to/first/input')
>>> tool.sources.append ('/path/to/second/input/file')
>>> tool.getInputs()
['/path/to/first/input', '/path/to/second/input/file']
Returns all the values associated to output parameters
Example:
>>> tool = CliTool (
'tool',
'[flags] [target] [sources]',
{
'flags' : CliParam ('', 'cvf'),
'target' : CliParam ('-o', arg_type = 'out'),
'sources' : CliParam ('', [], arg_type = 'in')
}
)
>>> tool.target = '/path/to/output'
>>> tool.sources.append ('/path/to/first/input')
>>> tool.sources.append ('/path/to/second/input/file')
>>> tool.getOutputs()
['/path/to/output']
Returns the internal template
Returns the list of variables that are being used as a template arguments
Example:
>>> cc = CliTool (
'g++',
'[compile_only] whatever [out_file] -KS [defines] -List [includes] [sources]',
{
'compile_only' : CliParam ('-c', True),
'out_file' : CliParam ('-o'),
'defines' : CliParam ('-D', []),
'includes' : CliParam ('-I', []),
'sources' : CliParam (default = [])
}
)
>>> cc.getTemplateKeys()
['compile_only', 'out_file', 'defines', 'includes', 'sources']
Returns the internal tool
Returns True when the key has been defined as a parameter (even if it has not been initialized), or when there is a private value that has been initialized using the given key.
Overrides the default behaviour of a function defined on an object of this class
>>> def custom_build (self, *args, **kwargs):
...     print "custom build function!"
...     return []
>>> cc.hook ('build', custom_build, 'old_build')
Saves all the current parameters of the CliTool object as the default for this object, so if you later call the reset function, it will return to this state.
Resets the current object to the initial state defined for each parameter.
If an object or dict is passed, it will then update the internal data according to the passed object.
Updates a single argument at a time overriding any previous value or default value
Example:
>>> cli.set ('name', 'value')
>>> cli.name
'value'
>>> cli.get ('name', 'missing')
'value'
Converts given value to a list using the following rules:
- None or False return an empty list
- scalars return a list containing one item (the scalar itself)
- tuples and lists return a list cast of the tuple or list
Update several command-line arguments at the same time by using a dictionary.
Please note that this method will override the current values for all the keys passed
Example:
>>> clitool.update (key_value_dict)
>>> clitool.update ({'key1' : 'value1', ..., 'keyN' : 'valueN'})
>>> clitool.update (key1 = 'value1', ..., keyN = valueN)
Convenient method that, depending on whether a file object or a string is passed, is either a wrapper around python json.dump or opens the file and writes given object to it.
When a file path string is passed, the file is always created or overwritten.
>>> with open('myfile.json', 'w+t') as f:
pyke.dump (data, f)
>>> pyke.dump (data, 'myfile.json')
>>> pyke.dump (data, 'myfile.json', pretty = True)
Additionally, the pretty parameter can be passed to pretty-print the output.
Wrapper around python json.dumps method
Fast method to fix lazy json documents by applying a set of lenient parsing rules (for example, quoting unquoted strings).
Same as fixLazyJson but removing comments as well
Convenient wrapper around python json.load method which uses our internal json lazy parser in case the standard one fails.
Please note that if a string is passed instead of a file object it will load given file.
See lazyloads for details.
Tries to parse given json string using the default json parser; if anything goes wrong, it tries to load it using a lazy parser that is more tolerant of non-standard json, able to parse unquoted strings, comments, ...
If the lazy parser fails to load the json string, the default value will be returned.
>>> pyke.json.lazyloads ('{ a : "b" }')
{ 'a' : 'b' }
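The strict-then-lazy fallback can be sketched as follows; the fixup shown here (quoting bare object keys with a regex) is only a minimal stand-in for pyke's real lazy parser, which also handles comments and more:

```python
import json
import re

def lazyloads(text, default=None):
    """Try the strict parser first; fall back to a lenient pass."""
    try:
        return json.loads(text)
    except ValueError:
        # minimal "lazy" fixup: quote bare identifiers used as object keys
        fixed = re.sub(r'([{,]\s*)([A-Za-z_][A-Za-z0-9_]*)(\s*:)', r'\1"\2"\3', text)
        try:
            return json.loads(fixed)
        except ValueError:
            return default
```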
Convenient method that, depending on whether a file object or a string is passed, is either a wrapper around python json.load or opens and reads the given file and parses it.
>>> with open('myfile.json', 'rt') as f:
data = pyke.load (f)
>>> data = pyke.load ('myfile.json')
Wrapper around python json.loads method
Displays messages when the current mode is ‘brief’
Always display the message regardless of the mode (even in quiet)
This method shows given msg string when display mode is verbose or debug
Displays warning messages
Returns the absolute normalized route to given path or list of paths.
An extra cwd variable can be passed as a second argument to specify the current absolute path.
Example:
>>> pyke.path.abspath ('mydir/myfile.txt')
'/abs/path/to/mydir/myfile.txt'
>>> pyke.path.abspath ('mydir/myfile.txt', '/my/abs/path')
'/my/abs/path/mydir/myfile.txt'
>>> pyke.path.abspath (['myfile1', 'myfile2'])
['/abs/path/to/myfile1', '/abs/path/to/myfile2']
Returns the basename of given path or list of paths.
Accepts strings and lists of strings or file items. Returns strings or list of strings.
Example:
>>> pyke.path.basename('/path/to/something.txt')
'something.txt'
>>> pyke.path.basename('/path/to/something/')
''
>>> pyke.path.basename(['/path/', '/another/path/to/file'])
['', 'file']
Changes the current working directory to given path and returns the old work directory. In case the function cannot switch to the new path, None will be returned
The idea is to acquire a lock and chdir to given work directory.
This function is useful due to the nature of pyke: all pyke functions are executed in parallel in the same process, so a single current work directory is shared among all threads; if one thread changes the directory, the rest of the threads see that change as a side effect.
Note
IMPORTANT: This function should be paired with a final chdirRestore call.
The idea is to unlock the chdir operation and restore the old chdir when the first chdirEnter was called.
This function is useful due to the nature of pyke: all pyke functions are executed in parallel in the same process, so a single current work directory is shared among all threads; if one thread changes the directory, the rest of the threads see that change as a side effect.
Note
IMPORTANT: This function should be paired with an initial chdirEnter call.
Replaces the file extension of a path or list of paths
Example:
>>> pyke.path.chext('something.asdf', 'xml')
'something.xml'
>>> pyke.path.chext('something.asdf', '.xml')
'something.xml'
>>> pyke.path.chext('something.asdf', '')
'something'
>>> pyke.path.chext(['f1.json', 'f2.json'], '.json.gz')
['f1.json.gz', 'f2.json.gz']
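A minimal sketch of this extension-replacement behaviour (chext here mirrors the documented examples but is not the library's code):

```python
import os

def chext(path, ext):
    """Replace the extension of a path or list of paths.

    ext may be given with or without the leading dot; an empty ext
    removes the extension entirely.
    """
    if isinstance(path, (list, tuple)):
        return [chext(p, ext) for p in path]
    if ext and not ext.startswith('.'):
        ext = '.' + ext
    root, _old = os.path.splitext(path)
    return root + ext
```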
Returns the common path prefix in the list of paths. This works exactly the same as os.path.commonprefix
Example:
>>> pyke.path.commonprefix(['/path/to/file1', '/path/to/file2.txt'])
'/path/to/'
>>> pyke.path.commonprefix(['/path/to/file1', '/pa'])
'/pa'
Returns the folder part for given path or list of paths.
Example:
>>> pyke.path.dirname ('/path/to/myfile')
'/path/to/'
>>> pyke.path.dirname ('/path/to/')
'/path/to/'
>>> pyke.path.dirname ('/path/to')
'/path/'
>>> pyke.path.dirname (['../some/path/tofile', 'other/path'])
['../some/path/', 'other/']
Note
it works differently from os.path.dirname when the last character in the path is a slash. A slash at the end is always returned; thus, calling this method several times on the returned path, when the path is a dir, will always return the same path. You might want to have a look at topdir() to get the last dir of the path on each iteration.
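The slash-preserving behaviour described in the note can be sketched like this (a simplified version assuming POSIX-style separators):

```python
def dirname(path):
    """Directory part of path, always ending in a slash.

    A path that already ends in '/' is returned unchanged, so the
    function is idempotent on directories.
    """
    if path.endswith('/'):
        return path
    cut = path.rfind('/')
    return path[:cut + 1] if cut >= 0 else ''
```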
Expand all routes from root folder to current folder.
This is almost equivalent to pyke.path.topdirs() function, but the returned list goes from the root path to given path.
For example:
>>> pyke.path.dirs2root ('/home/user/project/file.txt')
[
'/',
'/home/',
'/home/user/',
'/home/user/project/'
]
>>> pyke.path.dirs2root ('/home/user/project/folder/')
[
'/',
'/home/',
'/home/user/',
'/home/user/project/',
'/home/user/project/folder/'
]
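A minimal sketch of this expansion, assuming POSIX-style absolute paths:

```python
def dirs2root(path):
    """Expand every directory from the root down to the dir of path."""
    # keep only the directory part; a trailing slash means path is a dir
    dirpart = path if path.endswith('/') else path[:path.rfind('/') + 1]
    dirs = ['/']
    prefix = '/'
    for part in dirpart.strip('/').split('/'):
        if part:
            prefix += part + '/'
            dirs.append(prefix)
    return dirs
```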
Returns True if given path exists.
In case a list is passed, it will return True only when ALL paths exist.
Example:
>>> pyke.path.exists ('/path/to/mypath')
True
>>> pyke.path.exists ('/path/to/missing-file')
False
>>> pyke.path.exists (['/path/to/file1', '/path/to/file2'])
True
>>> pyke.path.exists (['/path/to/file1', '/path/to/file2', '/path/to/missing-file'])
False
Same as os.path.expanduser but also accepts a list of paths.
Same as os.path.expandvars but also accepts a list of paths.
Filters a list of files by given extension (using the default case sensitivity of the current operating system).
More than one extension can be passed as a list, so files matching any of the extensions passed will be returned.
You might want to have a look at pyke.path.filter() for a more advanced filtering function which accepts regular expressions as well.
Example:
>>> myfiles = ['a.obj', 'a.h', 'b.h', 'c.obj', 'noext']
>>> extfilter (myfiles, '.h')
['a.h', 'b.h']
>>> extfilter (myfiles, 'obj')
['a.obj', 'c.obj']
>>> extfilter (myfiles, '.obj')
['a.obj', 'c.obj']
>>> extfilter (myfiles, ['.h', '.obj'])
['a.obj', 'a.h', 'b.h', 'c.obj']
>>> extfilter (myfiles, '.exe')
[]
>>> extfilter (myfiles, '')
['noext']
Note
this function does not accept file patterns or regular expressions.
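The extension filtering above can be sketched as follows (a case-sensitive simplification; the real function follows the OS's default case sensitivity):

```python
import os

def extfilter(files, exts):
    """Keep only the files whose extension matches one of exts."""
    if not isinstance(exts, (list, tuple)):
        exts = [exts]
    # extensions may be given with or without the leading dot
    wanted = {('.' + e if e and not e.startswith('.') else e) for e in exts}
    return [f for f in files if os.path.splitext(f)[1] in wanted]
```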
Receives a list of files and one or more file-like patterns or a compiled regular expression, and returns all the files matching given pattern.
Please note that the same patterns as fnmatch2regex and glob are used, you might want to use pyke.path.extfilter() if you just want to filter files by extension.
See some examples below:
>>> pyke.path.filter (['a.txt', 'b/ba.txt', 'c.jpg'], '*.txt')
['a.txt', 'b/ba.txt']
>>> pyke.path.filter (['a.txt', 'b/ba.txt', 'c.jpg'], ['*.txt', '*/*.txt'])
['a.txt', 'b/ba.txt']
>>> pyke.path.filter (['a.txt', 'b/ba.txt', 'c.jpg'], '**.txt')
['a.txt', 'b/ba.txt']
>>> pyke.path.filter (['a.txt', 'b/ba.txt', 'c.jpg'], '?.txt')
['a.txt']
>>> pyke.path.filter (['a.txt', 'b/ba.txt', 'c.jpg'], re.compile ('^.*\.jpg$', re.I))
['c.jpg']
Transforms a filename match string or an array of filename matches to a regular expression object that will match given patterns.
A compiled regular expression will be returned, unless an empty string, empty list or None is used as an argument, in which case None will be returned.
The advantage of this method is that when a list is provided, it will construct a single regular expression that will be able to match everything faster than doing multiple checks.
case_sensitive specifies whether the generated regular expression should match file names case-sensitively (True) or case-insensitively (False). When not specified, it automatically guesses based on the current OS (linux = True, win/mac = False).
For example:
>>> regex = fnmatch2regex ('*.txt')
>>> regex.match ('asdf.txt')
True
>>> regex.match ('file.dat')
False
>>> regex = fnmatch2regex (['*.txt', '*.TXT', '*asdf*.tx?'])
>>> regex.match ('asdf.txt')
True
>>> regex.match ('asdf.txz')
True
>>> regex.match ('woops.txz')
False
>>> regex = fnmatch2regex ('**/*.txt')
>>> regex.match ('file.txt')
False
>>> regex.match ('path/file.txt')
True
>>> regex = fnmatch2regex ('**.txt')
>>> regex.match ('file.txt')
True
>>> regex.match ('path/file.txt')
True
>>> regex.match ('path/to/file.txt')
True
Note
it works fairly similar to the way ‘fnmatch.translate’ works, but creates a regular expression that will work in any OS and any version of python.
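A simplified sketch of such a translation ('*' and '?' stay inside one path component, '**' crosses slashes; character classes and OS-dependent case handling are omitted). Note that re.match actually returns a match object (truthy) rather than literally True:

```python
import re

def fnmatch2regex(patterns, case_sensitive=True):
    """Translate one fnmatch-style pattern or a list of them into a
    single compiled regular expression, or None for empty input."""
    if not patterns:
        return None
    if isinstance(patterns, str):
        patterns = [patterns]
    alternatives = []
    for pat in patterns:
        out = []
        i = 0
        while i < len(pat):
            c = pat[i]
            if c == '*':
                if pat[i:i + 2] == '**':
                    out.append('.*')    # '**' crosses directory separators
                    i += 2
                    continue
                out.append('[^/]*')     # '*' stays inside one component
            elif c == '?':
                out.append('[^/]')      # '?' is any single non-slash char
            else:
                out.append(re.escape(c))
            i += 1
        alternatives.append('(?:%s)' % ''.join(out))
    flags = 0 if case_sensitive else re.IGNORECASE
    return re.compile('^(?:%s)$' % '|'.join(alternatives), flags)
```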
Splits a filename pattern into a tuple of two elements: the first contains any non-pattern part, and the second the pattern part that should be matched. This is useful for a more efficient ‘glob’ function.
Examples:
>>> fnsplit ('*')
('', '*')
>>> fnsplit ('/path/to/*')
('/path/to/', '*')
>>> fnsplit ('/path/to/*.txt')
('/path/to/', '*.txt')
>>> fnsplit ('/path/*/file.txt')
('/path/', '*/file.txt')
>>> fnsplit ('path/to/dir0?/*.txt')
('path/to/', 'dir0?/*.txt')
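A minimal sketch of this split (cut at the last slash before the first wildcard character; a pattern with no wildcard is assumed here to return the whole string as the literal part):

```python
import re

def fnsplit(pattern):
    """Split a pattern into (literal prefix, pattern part)."""
    m = re.search(r'[*?[]', pattern)
    if not m:
        return (pattern, '')
    # cut at the last slash before the first wildcard character
    cut = pattern.rfind('/', 0, m.start()) + 1
    return (pattern[:cut], pattern[cut:])
```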
Notify the filesystem cache that something happened to some file; this way we can speed things up (really useful on windows).
Returns the access time for given file or list of files.
0 will be returned in case the file does not exist or we don’t have permissions to access it.
Returns the creation time for given file or list of files.
0 will be returned in case the file does not exist or we don’t have permissions to access it.
Note
when ctime is not available in the platform, this method will return the modification time value instead, to avoid raising an exception. You can always use pyke.path.getstat() if you need finer-grained control.
Returns the current work directory for this process as a unicode string
Returns the file type for given path. It can return one of:
pyke.path.FILE_TYPE_NOT_EXISTS
pyke.path.FILE_TYPE_VIRTUAL
pyke.path.FILE_TYPE_FILE
pyke.path.FILE_TYPE_DIR
Return the modification time for given file or list of files
0 will be returned in case the file does not exist or we don’t have permissions to access it.
Returns the size of given file or list of files, and returns 0 if the file does not exist.
When a list of paths is passed as a parameter, a list of sizes will be returned in the same order as the input list of paths.
Example:
>>> pyke.path.getsize ('myfile')
32
>>> pyke.path.getsize ('invalid-size')
0
>>> pyke.path.getsize (['myfile', 'other.txt', 'invalid-size'])
[32, 12345, 0]
Perform a stat operation on given path using a cache to save internal results
When a list of paths is passed, a list of stat structures will be returned.
Returns True when the path is absolute in the current operating system.
In case a list of paths is used, True will be returned when ALL paths are absolute.
>>> pyke.path.isabs ('my/path')
False
>>> pyke.path.isabs ('/my/path')
True
>>> pyke.path.isabs (['/my/path', '/my/other/abs/path'])
True
>>> pyke.path.isabs (['../my/path', '/abs/path'])
False
Returns True if given path exists in the file system and it is a dir.
This method accepts a list of paths, in which case True will be returned when all of them exist.
Check the examples below:
>>> pyke.path.isdir ('my-dir')
True
>>> pyke.path.isdir (['my-dir', 'my-other-dir'])
True
>>> pyke.path.isdir (['my-dir', 'myfile.txt'])
False
Returns True when given path string is not virtual and ends with a slash
Note
This method only checks the passed string, it does not rely on the operating system in any way. You might want to check pyke.path.isdir() in case you want to know if the directory exists.
Returns True if given path exists in the file system and it is a file.
This method accepts a list of paths, in which case True will be returned when all of them exist.
Check the examples below:
>>> pyke.path.isfile ('dir/to/file.txt')
True
>>> pyke.path.isfile (['file.txt', 'other/path/to/img.jpg'])
True
>>> pyke.path.isfile (['my-dir', 'myfile.txt'])
False
>>> pyke.path.isfile (['myfile.txt', 'non-existing-file'])
False
Returns True if given path string is not virtual and ends with something different than a slash.
Note
This method only checks the passed string, it does not rely on the operating system in any way. You might want to check pyke.path.isfile() in case you want to know if the directory exists.
Returns True if given path is a symbolic link, in the same way os.path.islink works.
It accepts passing a list of paths and will return True if all paths are a link.
Example:
>>> pyke.path.islink ('my/path/to/link-file.txt')
True
>>> pyke.path.islink (['/path/to/link', '/path/to/link2'])
True
>>> pyke.path.islink (['/path/to/linkfile', '/path/to/file2'])
False
Returns True if given path is a mount point, in the same way os.path.ismount works.
It accepts passing a list of paths and will return True if all paths are a mount point.
Example:
>>> pyke.path.ismount ('/media/disk-abcd/')
True
>>> pyke.path.ismount (['/media/disk-abcd/', '/media/disk-asdf/'])
True
>>> pyke.path.ismount (['/media/disk-abcd/', '/path/to/file2'])
False
Returns True if given pattern needs to recurse into more than one directory.
Basically, if there is a ‘?’, ‘**’ or ‘*’ before a slash (for example ‘*/file.txt’ or ‘**/*.txt’), then the pattern is recursive.
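The stated rule can be sketched as:

```python
def isrecursive(pattern):
    """True when a wildcard appears before a slash, i.e. a directory
    component itself has to be matched."""
    head, sep, _tail = pattern.rpartition('/')
    return sep != '' and any(c in head for c in '*?')
```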
Identifies if given path is virtual (has been created with pyke.path.mkvirtfile())
Note
This method only checks the passed string, it does not rely on the operating system in any way. Virtual files are an internal representation used in pyke, they don’t have any other meaning outside pyke.
Returns True if given path is writable
Joins and normalizes one or more path components together.
In case one or more lists are passed, a list will be returned with all possible combinations of the joint paths.
Example:
>>> pyke.path.join ('path', 'to', 'file.txt')
'path/to/file.txt'
>>> pyke.path.join (['path'], 'to', 'somewhere.txt')
['path/to/somewhere.txt']
>>> pyke.path.join (['path'], '/overriden', 'somewhere.txt')
['/overriden/somewhere.txt']
>>> pyke.path.join (['first', 'second'], 'path/to', 'somewhere.txt')
['first/path/to/somewhere.txt', 'second/path/to/somewhere.txt']
Returns the last part of the path string. It is similar to basename but it returns the last folder if the path is a folder.
Accepts strings, file items and lists of strings or file items. Returns strings or list of strings.
Example:
>>> pyke.path.lastpath ('/path/to/something.txt')
'something.txt'
>>> pyke.path.lastpath ('/path/to/something/')
'something'
>>> pyke.path.lastpath (['/path/', '/another/path/to/file'])
['path', 'file']
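A minimal sketch of this behaviour, assuming slash-separated paths:

```python
def lastpath(path):
    """Last component of a path; trailing slashes are ignored, so a
    directory path yields the last folder name."""
    if isinstance(path, (list, tuple)):
        return [lastpath(p) for p in path]
    return path.rstrip('/').rsplit('/', 1)[-1]
```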
Returns True if given path exists, in the same way os.path.lexists works.
It accepts passing a list of paths and will return True if all paths exist.
Example:
>>> pyke.path.lexists ('my/path/to/file.txt')
True
>>> pyke.path.lexists (['/path/to/file1', '/path/to/file2'])
True
>>> pyke.path.lexists (['/path/to/file1', '/path/to/file2', '/path/to/missing-file'])
False
Returns a unique digest for all the paths that have been passed as a parameter.
Please note this function should be deterministic, so passing [‘a’, ‘b’] and [‘b’, ‘a’] will produce the same output.
Creates a unique virtual file name which starts with a magic string that cannot represent any real path.
Note
Virtual files are an internal representation used in pyke, they don’t have any other meaning outside pyke.
OS independent function that returns the path representation of how a file is written in the filesystem.
For example, on linux it returns the same path, but on Mac or Windows it returns the filesystem path preserving case, in a consistent way: for the same path it always returns the same representation.
For example:
>>> pyke.path.normcase ('c:\windows\system32')
"c:\Windows\System32"
>>> pyke.path.normcase ('c:\windows\SYSTEM32')
"c:\Windows\System32"
>>> pyke.path.normcase ('C:\WiNDoWS\system32')
"c:\Windows\System32"
>>> pyke.path.normcase (["c:\Windows\", "c:\Windows\System32"])
["c:\Windows\", "c:\Windows\System32"]
Please note that calling os.path.normcase always returns the lowercase version of the path and does not preserve the filesystem representation.
With this function we get a consistent normalization of the case while preserving the file system representation chosen when the file was created.
NOTE: relative paths on case-insensitive filesystems are not normalized against the filesystem, but for consistency, the same path is always returned for file paths with the same name.
Makes sure given path is normalized and treated as a directory by guaranteeing it ends with a slash.
Example:
>>> normdir('/path/to\dir')
'/path/to/dir/'
>>> normdir('/path/to/dir/')
'/path/to/dir/'
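A minimal sketch of this normalization (separator fixing plus a guaranteed trailing slash):

```python
def normdir(path):
    """Normalize separators and guarantee a single trailing slash."""
    path = path.replace('\\', '/')
    return path if path.endswith('/') else path + '/'
```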
Normalize file path of given path string or path item.
Note
On case-insensitive systems it will try to produce a deterministic output by storing a cache of paths, so the same input path, in any case, will produce the same output path.
Note
Please note that if last character is a ‘slash’, it will return the normalized path with a slash added at the end
Simple function to ‘remap’ given path, from given folder to the other.
The difference between this method and a normal string replace is that it normalizes paths first, and it can act on lists and FileItems.
Example:
>>> pyke.path.pathreplace ('/full/path/to/file', '/full/path/', '/another/path')
'/another/path/to/file'
Returns a random file name very unlikely to be repeated, either during this execution or another.
Example:
>>> pyke.path.randname ()
'c06c32098062852b42daea5065a6b3fe'
>>> pyke.path.randname (suffix = '.json')
'c0338d22fb726f624efa01f5c2b77d02.json'
>>> pyke.path.randname (prefix = 'MyPrefix-', suffix = '.xml')
'MyPrefix-792cb7c30f624f77d0ebfc2b8d2f5a0.xml'
Note
This does not create or modify any file
See also: pyke.path.tmpname, pyke.sysutil.mktmpdir, pyke.sysutil.mktmpfile
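A sketch of how such a name can be generated (using uuid4 as one reasonable source of uniqueness; the document does not state the library's actual mechanism):

```python
import uuid

def randname(prefix='', suffix=''):
    """A 32-char hex name that is extremely unlikely to repeat,
    optionally wrapped in a prefix and a suffix."""
    return prefix + uuid.uuid4().hex + suffix
```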
When a path string is passed, it returns the normalized canonical path to given location, or when a list of paths is passed, a list of normalized canonical paths is returned.
It works in the same way os.path.realpath works.
Return a relative filepath to path from given start directory.
If a list of paths is passed in either of the parameters, a list with the relative paths will be returned instead.
Please note that both path and start should be absolute dirs.
Example:
>>> pyke.path.relpath ('/path/to/dir/', '/path/to')
'dir/'
>>> pyke.path.relpath (['/path/to/f.txt', '/path/for/f2.txt'], '/path/to')
['f.txt', '../for/f2.txt']
This is just an alias for os.path.samefile
Returns True if the underlying path1 and path2 refer to the same file and inode.
This is just an alias for os.path.sameopenfile
Returns True if the underlying fp1 and fp2 handles refer to the same file.
This is just an alias for os.path.samestat
Returns True if the underlying path1 and path2 refer to the same file and inode.
This is just an alias for os.path.split function.
This is just an alias for os.path.splitdrive function.
This is just an alias for os.path.splitext function.
Splits all components of the path into dirs and files. Please note that a leading empty string means the path is absolute.
Example:
>>> pyke.path.splitpath ('/home/user/file.txt')
['', 'home', 'user', 'file.txt']
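A minimal sketch, assuming slash-separated paths:

```python
def splitpath(path):
    """Split a path into all of its components; a leading '' marks an
    absolute path."""
    return path.replace('\\', '/').split('/')
```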
This is just an alias for os.path.splitunc function.
Returns a small hash for given string
The aim of this function is to use a fast hash function that minimizes the number of collisions between strings and minimizes the memory usage.
http://www.strchr.com/hash_functions
http://programmers.stackexchange.com/questions/49550/which-hashing-algorithm-is-best-for-uniqueness-and-speed
CRC32 has good collision properties, but collisions still happen; md5 has great collision properties, but it is slower.
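As an illustration of the trade-off, a classic small and fast string hash (djb2) can be written as below; the document does not state which function pyke actually uses:

```python
def strhash(s):
    """djb2-style hash: fast, low memory, reasonable collision rate."""
    h = 5381
    for ch in s:
        h = ((h * 33) + ord(ch)) & 0xFFFFFFFF  # keep the value in 32 bits
    return h
```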
Returns the temporary directory of the system.
See also: pyke.sysutil.mktmpdir, pyke.sysutil.mktmpfile
Returns a temporary name in the system temporary dir that is highly unlikely to be repeated.
Example:
>>> pyke.path.tmpname ()
'/tmp/pyketmp-2852b4c06c065062a6b3f32098daea5e'
>>> pyke.path.tmpname (suffix = '.json')
'/tmp/pyketmp-f0078d22fb721f5c2b6f624fa077d0eb.json'
>>> pyke.path.tmpname (prefix = 'asdf', dir = '/my/tmp/dir/')
'/my/tmp/dir/asdf-792cb7c30f624f77d0ebfc2b8d2f5a0'
Note
although it is very unlikely that this method returns the same file name as any other method, either in this execution or in another, it is not 100% safe (maybe 99.9999...% safe)
See also: pyke.path.randname, pyke.sysutil.mktmpdir, pyke.sysutil.mktmpfile
Returns top directory for given path. If the path is already a directory it will return the top directory of that.
Example:
>>> pyke.path.topdir ('/path/to/somewhere')
'/path/to/'
>>> pyke.path.topdir ('/path/to/')
'/path/'
>>> pyke.path.topdir ('/path')
'/'
>>> pyke.path.topdir ('/')
'/'
Returns a list of all the top directories for given path, including the current dir, if a dir is specified.
Example:
>>> pyke.path.topdirs('/path/to/file.txt')
['/path/to/', '/path/', '/']
A use of this method might be to find tools in projects easily:
>>> pyke.sysutil.findpath ('thirdparty/tool/bin/mytool', pyke.path.topdirs(os.getcwd()))
'/path/to/somewhere/thirdparty/tool/bin/mytool'
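A minimal sketch of this parent-directory expansion, assuming POSIX-style paths:

```python
def topdirs(path):
    """All parent directories of path, from the closest up to the root.

    If path itself is a dir (trailing slash), it is included first.
    """
    dirs = []
    # keep only the directory part of the path
    current = path if path.endswith('/') else path[:path.rfind('/') + 1]
    while current:
        dirs.append(current)
        if current == '/':
            break
        # drop the last directory component and keep its trailing slash
        current = current[:current.rstrip('/').rfind('/') + 1]
    return dirs
```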
It calls the function visit with arguments (arg, dirname, names) for each directory in the directory tree rooted at path.
This is an alias for os.path.walk.
Add a path or a list of paths to the default search paths
Returns the hex digest after running the MD5 on given file
Change the mode of path
Compresses given map of source files inside target (removing it if already exists)
Each source file can have associated a list of destination files, so the same file might be compressed as a set of different files easily.
Example:
>>> pyke.sysutil.compress ('compressed.zip', {
'source/LICENSE.txt' : ['EULA', 'LICENSE'],
'source/bin/app.exe' : 'mybindir/app.exe',
'source/bin/README.txt' : 'README'
})
>>> # This will generate a zip file with these files inside:
>>> # - mybindir/app.exe
>>> # - README
>>> # - EULA
>>> # - LICENSE
Creates target file by reading all sources and concatenating all of them.
If any of the source files cannot be read, this command will fail and return -1.
Returns 0 on success, or -1 if a read/write error occurs and ignore_errors is not specified.
It accepts many sources but only one destination, so you can only copy one file to another file, or several files to a folder. It also accepts file patterns as sources.
If the folder does not exist, this command will fail unless mkdirs is True.
replace is used to specify a replacement pair list or dict just as in filereplace function, so contents will be replaced during copy.
If the target file exists it will be overwritten unless ‘overwrite’ is set to False.
When copy_metadata is True, file permissions, dates, ... are preserved.
e.g:
>>> # copies 1,2,3 files inside folder/
>>> cp (['1', '2', '3'], 'folder/')
>>> # creates file '2' by copying contents of file '1'
>>> cp ('1', '2')
>>> # copies the file '1' to the file '2' even if file '2' already exists
>>> cp ('1', '2', overwrite = True)
>>> # will return True even if file copy fails
>>> cp (['1', '2', '3'], 'folder/', ignore_errors = True)
>>> # if 'folder' does not exist it will be created so files can be copied
>>> cp (['1', '2', '3'], 'folder/', mkdirs=True)
Copy all sources recursively into target dir overwriting files if needed.
exclude can be a path, a file pattern, a list combining both, or a regular expression. Please note that exclude = ‘**.txt’ excludes all ‘txt’ files in any folder, whereas ‘*.txt’ will probably do nothing, and os.path.join (source, ‘*.txt’) will exclude all txt files in the source folder but will not affect txt files in any of the subfolders being copied. This behaviour is on purpose.
NOTE: copy_metadata has not been properly implemented, and at this point will only copy metadata from source files, but not dirs.
>>> cptree ('*.txt', 'target')
>>> # copy all files inside 'dir' to a directory named 'target', whether it exists or not
>>> cptree ('dir/', 'target/')
Creates the target_file file with the downloaded contents from given source_url. The file is overwritten if it already exists.
target_file is the binary file to be created
source_url is the download URL or Request object to download
ignore_errors should be set to True to ignore any error (please note that target_file will still be deleted)
Note
the contents of target_file will be overwritten, and, in the case of any error, the file will be removed (to avoid leaving it inconsistent).
Downloads the contents from given URL.
Example:
>>> download2string ('http://www.valid-server.com/hello.html')
'<html><head><title>Hi!</head><body>Hello!!</body></html>'
>>> download2string ('http://invalid-url/hello.html')
-1
Properly escape arguments in all OSes
Runs given python function and returns a tuple containing (exit_code, stdout_string, stderr_string). Please note that 0 means success and any other number an error.
Also, please note that to guarantee a proper execution of the python function in given directory only one function can be executed at a time in this process.
Runs a system command, and returns a tuple containing (exit_code, stdout_string, stderr_string), where exit_code = 0 means success.
responsive_safeprint allows this command to redirect all standard output and standard error to that function, line by line. If it is a tuple, the first element is the printer and the second is the extra argument.
When responsive_safeprint is provided, that function will be called with the same parameters as the safeprint function for each new line the command outputs, for both stderr and stdout. In that case, stdout and stderr will still be returned to the caller of this function, but this way the output can also be displayed as it is produced.
Opens given source file, replaces contents using replace_pairs and saves the result in target file.
replace_pairs can be either a dictionary or a list of tuples, where each key is the search string and the value is the replacement. List of tuples is encouraged in order to guarantee the replacement order.
Search keys can either be normal strings or compiled regular expressions, in which case, re.sub will be used instead.
If a list of files is passed, it will apply the replace operation on all files
NOTE: The whole file is read into memory to perform the replacement.
Example:
>>> filereplace ('input', 'output', { 'search' : 'replacement'})
0
>>> filereplace ('input', 'output', [('search', 'replacement') ])
0
>>> filereplace ('input', 'output', [(re.compile('search', re.I), 'replacement') ])
0
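The core replacement logic (ordered pairs, plain strings via str.replace, compiled regular expressions via re.sub) can be sketched without the file handling as:

```python
import re

def replace_all(text, replace_pairs):
    """Apply ordered (search, replacement) pairs to text.

    Plain string keys use str.replace; compiled regexes use re.sub.
    A dict is accepted too, but a list of tuples guarantees the order.
    """
    pairs = replace_pairs.items() if isinstance(replace_pairs, dict) else replace_pairs
    for search, repl in pairs:
        if isinstance(search, str):
            text = text.replace(search, repl)
        else:
            text = search.sub(repl, text)
    return text
```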
Returns the file size
Return all files under given path that match given file pattern.
A file pattern accepts ‘?’ and ‘*’ as wildcard characters. Recursion can be disabled to look for patterns in given directory only.
Please note that folders themselves are not returned by this function.
# TODO: use an extra_files list so we can search for files that do not exist yet
Find given file or directory using given search path list.
If the file does not exist on disk but is present on this list, the item on the list will be returned.
Results are cached so that the same file is not searched for more than once during the same execution. Please note that caching might not find the proper file if the file system has been updated.
Reads the whole file to memory and returns it back.
If format is json, the file will be parsed.
Finds path names matching input patterns for given absolute path or relative to current working dir.
Accepts a pattern string, a list of patterns, or a regular expression.
When a pattern or list of patterns is used, it is guessed whether the pattern traverses any directory, in which case it has to iterate through all files. For regular expressions it always iterates recursively.
The pattern strings should be any file-like pattern including ‘?’ or ‘*’ or ‘**/’. If the pattern ends with a slash, then directories will be matched. If you need anything more complex, use a compiled regular expression.
It always returns a list of absolute paths for the matching files, even though internally it matches with relative paths instead.
Please note that this function might return an empty list if no files match given pattern or set of patterns. Also, take into account that when recursive patterns are specified, it can be very time consuming when applied to dirs with huge amount of files.
Examples:
>>> glob ('file.txt')
[ '/abs/path/to/file.txt' ]
>>> glob ('not-existing-file.txt')
[]
# text files in current dir
>>> glob ('*.txt')
[ '/abs/path/to/file.txt' ]
# text files inside a folder
>>> glob ('**/*.txt')
[
'/abs/path/to/DIR01/file.txt',
'/abs/path/to/DIR02/file.txt',
'/abs/path/to/DIRNN/file.txt'
]
# text files inside current dir or any folder
>>> glob (re.compile ('.*\.txt$', re.I))
[
'/abs/path/to/DIR01/file.txt',
'/abs/path/to/DIR02/file.txt',
'/abs/path/to/DIRNN/file.txt'
]
# *.txt in current dir, and all dirs
>>> glob (['*.txt', '**/'], '/abs/path/to')
[
'/abs/path/to/file.txt',
'/abs/path/to/DIR01/',
'/abs/path/to/DIR02/',
'/abs/path/to/DIR02/subdir/',
'/abs/path/to/DIRNN/'
]
Helper function that creates an expanded list of (src, target) pairs.
All source patterns are expanded and properly paired with a target dir.
Please note that when recursive is True, directory paths will also be returned.
Example:
>>> globmapping ('*.txt', 'mydir')
[ ('f1.txt', 'mydir/f1.txt'), ('f2.txt', 'mydir/f2.txt')]
>>> globmapping ('**.txt', 'mydir')
[ ('f.txt', 'mydir/f.txt'), ('d2/f.txt', 'mydir/d2/f.txt'), ...]
>>> globmapping ('path', 'mydir', recursive = True)
[
('path/', 'mydir/'),
('path/f.txt', 'mydir/f.txt'),
('path/d2/', 'mydir/d2/'),
('path/d2/f.txt', 'mydir/d2/f.txt'),
...
]
Accepts a path and a regular expression and recursively iterates through given path trying to match given regular expression against all files inside that path. If recursive is True, it will enter all subdirs.
Example:
>>> regex = re.compile ('^.*\.txt$', re.I)
>>> globregex (regex, '/path/')
['/path/to/a.txt', '/path/to/subdir/b.txt', '/path/j.txt']
Returns True if given command is a valid application name for current platform
Returns the hex digest after running the MD5 on given file
Returns the hex digest after running MD5 on given string
Create a directory (recursively) for each argument in list if the path does not exist.
Always returns True unless the final path is not a directory.
Create a temporary dir in the system in the most secure and safe way. This is an alias for tempfile.mkdtemp, so there should be no race conditions, and the directory should be readable, writable and searchable only by the current user.
Returns the temporary dir that has been created.
Example:
>>> pyke.sysutil.mktmpdir()
'/tmp/pyketmp-927182'
See also: pyke.path.tmpname, pyke.path.tmpdir
Returns a tuple containing an OS-level handle to an open file (as it would have been returned by os.open()) and the absolute pathname of that file.
You should delete the file when you are done with it.
Example:
>>> pyke.sysutil.mktmpfile ()
(37, '/tmp/pyketmp-8271662')
>>> (fd, fname) = pyke.sysutil.mktmpfile ()
>>> os.write (fd, 'whatever')
>>> os.close (fd)
>>> pyke.sysutil.rmfile (fname)
See also: pyke.path.tmpname, pyke.path.tmpdir
Recursively move a file or directory (source) to another location (target). If the destination is a directory or a symlink to a directory, then source is moved inside that directory.
Creates the file with given set of contents, but only if it does not exist yet
When ‘json’ format is passed, it converts contents to a json string
When append is set to True, it will append the contents to the end of the file.
Returns 0 on success or if the file already exists and is not forced
Removes a file or a set of files from the file system.
Read-only files won’t be removed unless the ignore_errors option is specified.
This method will always return 0 when ignore_errors is specified.
Even if it fails to remove a file, it will continue with the rest, it will only return 1 at the end of the function, but a failure won’t stop removing files.
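The continue-on-failure return logic can be sketched as follows (the signature is an assumption; note that on POSIX, os.remove deletes read-only files whenever the parent directory is writable, so the read-only behavior described above would need an explicit mode check, omitted here):

```python
import os

def rm(files, ignore_errors=False):
    # Keep going after a failure; report 1 only at the very end.
    ret = 0
    for f in files:
        try:
            os.remove(f)
        except OSError:
            if not ignore_errors:
                ret = 1
    return ret
```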
Removes all directories on the given path list from the filesystem, but only if they contain no files.
If a directory still contains files, it won’t be removed.
When ignore_errors is specified, it always returns 0.
If it fails to remove a directory, it continues with the rest: a failure won’t stop the removal of the remaining folders, but the function will return 1 at the end.
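This maps naturally onto os.rmdir, which only removes empty directories. A sketch under that assumption (the signature is illustrative):

```python
import os

def rmdir(paths, ignore_errors=False):
    # os.rmdir raises OSError on non-empty directories, which matches
    # the "only if there are no files inside" rule above.
    ret = 0
    for p in paths:
        try:
            os.rmdir(p)
        except OSError:
            if not ignore_errors:
                ret = 1
    return ret
```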
This is an alias for pyke.sysutil.rm
Removes all files in a folder, recursively.
A dummy function that only sleeps for the given amount of seconds.
Updates the access and modification times of each file to the current time. If a file does not exist, it will be created.
ignore_errors only forces the function to always return 0.
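A sketch of the described touch semantics for a single path (the name and signature are assumptions):

```python
import os

def touch(path, ignore_errors=False):
    try:
        # Opening in append mode creates the file if it is missing
        # without truncating an existing one.
        with open(path, 'a'):
            pass
        os.utime(path, None)  # set atime/mtime to now
        return 0
    except OSError:
        return 0 if ignore_errors else 1
```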
Uncompresses the given compressed_file into the given target dir (which will be created if it does not exist), using any of the supported compression algorithms (zip, tar, tar.gz, tar.bz2).
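All four listed formats are covered by shutil.unpack_archive in the standard library, so the behavior can be sketched as (function name is illustrative):

```python
import os
import shutil

def uncompress(compressed_file, target_dir):
    # shutil.unpack_archive infers the format (zip, tar, tar.gz,
    # tar.bz2) from the file extension.
    os.makedirs(target_dir, exist_ok=True)
    shutil.unpack_archive(compressed_file, target_dir)
```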
Print given text (with colors) using ANSI standard (whenever possible).
On *nix this just means writing to standard output (or standard error). Passing error = True sends the output to standard error.
Transforms a text string with color sequences into ANSI escape sequences. Search for ‘ANSI escape sequences’ for more information.
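The idea of mapping symbolic color sequences to ANSI escapes can be illustrated as below. The tag syntax and mapping are assumptions for the example, not pyke's actual color-sequence format:

```python
# Hypothetical tag-to-escape mapping; pyke's real sequence syntax may differ.
ANSI = {'red': '\033[31m', 'green': '\033[32m', 'reset': '\033[0m'}

def colorize(text):
    # Replace each symbolic tag with its ANSI escape sequence.
    for name, code in ANSI.items():
        text = text.replace('<%s>' % name, code)
    return text
```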
Locks a global mutex for printing terminal output
Print a string using a global mutex
This is a sample function that uses the ‘pyke.terminal’ module to print to standard output and standard error with colors.
Doing this, all output will be redirected to this function, which should color the output according to the parameters it receives.
Note: you can also define your own hook and do whatever you want with the output
Unlocks a global mutex for printing terminal output
Opens the given C/C++ file and analyzes all its C/C++ dependencies (include files) by parsing the #include directives.
search_paths: list of paths where the included files can be found
explicit_deps: list of dependencies which might have been created by previous commands
This method uses the internal cache of the files in the current graph for faster lookups.
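The core of such a dependency scan is extracting the #include targets; a minimal sketch (not pyke's implementation, which also resolves the targets against search_paths and its file cache):

```python
import re

# Matches both #include <system.h> and #include "local.h", tolerating
# whitespace between '#' and 'include'.
INCLUDE_RE = re.compile(r'^\s*#\s*include\s*[<"]([^>"]+)[>"]', re.M)

def find_includes(source_text):
    # Return the raw include targets in order of appearance.
    return INCLUDE_RE.findall(source_text)
```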
Parses the given input Java file to deduce the names of the output files generated by the ‘javac’ program. Usually this is the same javafile, unless it contains internal classes or interfaces, in which case new files will be created.
Analyzes the file to extract all internal class names and interfaces. A list is returned with all the classes, of the form:
['ClassName01', 'ClassName02', 'ClassName02$Nested', 'ClassName02$Nested$Nested2', ...]
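A rough sketch of the extraction step. This only collects declared class and interface names; producing the $-qualified nested form shown above requires tracking brace depth, which is omitted here, and the function name is an assumption:

```python
import re

# Matches 'class Foo' and 'interface Bar' declarations.
DECL_RE = re.compile(r'\b(?:class|interface)\s+(\w+)')

def java_type_names(source_text):
    # Flat list of declared names, nested ones included but not
    # $-qualified as javac would name their .class files.
    return DECL_RE.findall(source_text)
```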