Fixed spelling
parent: 751f093de9
commit: 6281a40335
CHANGELOG.md (12 lines changed)
@@ -82,7 +82,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- The default output pattern now includes the `output_id` (%I)

### Fixed
- Position files now defaults to use the auxiliar origin as KiCad.
- Position files now defaults to use the auxiliary origin as KiCad.
Can be disabled to use absolute coordinates. (#87)
- Board View: flipped output. (#89)
- Board View: problems with netnames using spaces. (#90)

@@ -130,7 +130,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

### Fixed
- Problem when using E/DRC filters and the output dir didn't exist.
- Not all errors during makefile generation were catched (got a stack trace).
- Not all errors during makefile generation were caught (got a stack trace).
- Output dirs created when generating a makefile for a compress target.
- Problems with some SnapEDA libs (extra space in lib termination tag #57)
- The "References" (plural) column is now coloured as "Reference" (singular)

@@ -268,8 +268,8 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

## [0.6.2] - 2020-08-25
### Changed
- Discarded spaces at the beggining and end of user fields when creating the
internal BoM. They are ususally mistakes that prevents grouping components.
- Discarded spaces at the beginning and end of user fields when creating the
internal BoM. They are usually mistakes that prevents grouping components.

### Fixed
- The variants logic for BoMs when a component resquested to be only added to

@@ -277,7 +277,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Removed warnings about malformed values for DNF components indicating it in
its value.
- Problems with PcbDraw when generating PNG and JPG outputs. Now we use a more
reliable conversion methode when available.
reliable conversion method when available.

## [0.6.1] - 2020-08-20
### Added

@@ -407,7 +407,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

### Fixed
- All pcbnew plot formats generated gerber job files
- Most formats that needed layers didn't complain when ommited
- Most formats that needed layers didn't complain when omitted

## [0.2.4] - 2020-05-19
### Changed
README.md (34 lines changed)
@@ -8,7 +8,7 @@

**Important for KiCad 6 users**:
- Only the code in the git repo supports KiCad 6 (no stable release yet)
- The docker images taget `ki6` has KiCad 6, but you need to use the KiBot from the repo, not the one in the images.
- The docker images target `ki6` has KiCad 6, but you need to use the KiBot from the repo, not the one in the images.
- The docker image with KiCad 6 and KiBot that supports it is tagged as `dev_k6`
- The GitHub action with KiCad 6 support is tagged as `v1_k6`
- When using KiCad 6 you must migrate the whole project and pass the migrated files to KiBot.
@@ -319,7 +319,7 @@ This selection isn't stored in the PCB file. The global `units` value is used by

#### Output directory option

The `out_dir` option can define the base outut directory. This is the same as the `-d`/`--out-dir` command line option.
The `out_dir` option can define the base output directory. This is the same as the `-d`/`--out-dir` command line option.
Note that the command line option has precedence over it.

Expansion patterns are applied to this value, but you should avoid using patterns that expand according to the context, i.e. **%c**, **%d**, **%f**, **%F**, **%p** and **%r**.
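For illustration, a minimal sketch of setting this from the config file; placing `out_dir` inside a `global` section is an assumption here, only the option itself is described above:

```yaml
kibot:
  version: 1

global:
  # Assumption: `out_dir` belongs to the `global` section; same effect as running with `-d Generated`
  out_dir: 'Generated'
```

A value given with `-d`/`--out-dir` on the command line would still take precedence, as noted above.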
@ -379,7 +379,7 @@ Both concepts are closely related. In fact variants can use filters.
|
|||
The current implementation of the filters allow to exclude components from some of the processing stages. The most common use is to exclude them from some output.
|
||||
In the future more advanced filters will allow modification of component details.
|
||||
|
||||
Variants are currently used to create *assembly variants*. This concept is used to manufature one PCB used for various products.
|
||||
Variants are currently used to create *assembly variants*. This concept is used to manufacture one PCB used for various products.
|
||||
You can learn more about KiBot variants on the following [example repo](https://inti-cmnb.github.io/kibot_variants_arduprog/).
|
||||
|
||||
As mentioned above the current use of filters is to mark some components. Mainly to exclude them, but also to mark them as special.
|
||||
|
|
@@ -412,12 +412,12 @@ Currently the only type available is `generic`.
- generic: Generic filter
This filter is based on regular expressions.
It also provides some shortcuts for common situations.
Note that matches aren't case sensitive and spaces at the beggining and the end are removed.
Note that matches aren't case sensitive and spaces at the beginning and the end are removed.
The internal `_mechanical` filter emulates the KiBoM behavior for default exclusions.
The internal `_kicost_dnp` filter emulates KiCost's `dnp` field.
* Valid keys:
- `comment`: [string=''] A comment for documentation purposes.
- `config_field`: [string='Config'] Name of the field used to clasify components.
- `config_field`: [string='Config'] Name of the field used to classify components.
- `config_separators`: [string=' ,'] Characters used to separate options inside the config field.
- `exclude_all_hash_ref`: [boolean=false] Exclude all components with a reference starting with #.
- `exclude_any`: [list(dict)] A series of regular expressions used to exclude parts.
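As a quick illustration of these keys, a hedged sketch of a `generic` filter definition; the `column`/`regex` sub-keys used inside `exclude_any` are an assumption, they aren't among the keys listed in this excerpt:

```yaml
filters:
  - name: 'exclude_tp_fid'
    comment: 'Skip test points and fiducials'
    type: generic
    exclude_all_hash_ref: true
    exclude_any:
      # `column`/`regex` are assumed sub-key names for each entry
      - column: Reference
        regex: '^(TP|FID)'
```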
@@ -495,12 +495,12 @@ Currently the only type available is `generic`.
- `split_fields`: [list(string)] List of fields to split, usually the distributors part numbers.
- `split_fields_expand`: [boolean=false] When `true` the fields in `split_fields` are added to the internal names.
- `use_ref_sep_for_first`: [boolean=true] Force the reference separator use even for the first component in the list (KiCost behavior).
- `value_alt_field`: [string='value_subparts'] Field containing replacements for the `Value` field. So we get real values for splitted parts.
- `value_alt_field`: [string='value_subparts'] Field containing replacements for the `Value` field. So we get real values for split parts.
- var_rename: Var_Rename
This filter implements the VARIANT:FIELD=VALUE renamer to get FIELD=VALUE when VARIANT is in use.
* Valid keys:
- `comment`: [string=''] A comment for documentation purposes.
- `force_variant`: [string=''] Use this variant instead of the current variant. Usefull for IBoM variants.
- `force_variant`: [string=''] Use this variant instead of the current variant. Useful for IBoM variants.
- `name`: [string=''] Used to identify this particular filter definition.
- `separator`: [string=':'] Separator used between the variant and the field name.
- `variant_to_value`: [boolean=false] Rename fields matching the variant to the value of the component.
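A small sketch of a `var_rename` filter using only the keys listed above; the `name`/`type` wrapper follows the usual filter layout and is assumed here:

```yaml
filters:
  - name: 'rename_for_production'
    comment: 'Apply PRODUCTION:FIELD=VALUE renames'
    type: var_rename
    force_variant: 'PRODUCTION'
    separator: ':'
    variant_to_value: false
```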
@ -802,7 +802,7 @@ Next time you need this list just use an alias, like this:
|
|||
* Valid keys:
|
||||
- `file`: [string=''] Name of the schematic to aggregate.
|
||||
- `name`: [string=''] Name to identify this source. If empty we use the name of the schematic.
|
||||
- `number`: [number=1] Number of boards to build (components multiplier). Use negative to substract.
|
||||
- `number`: [number=1] Number of boards to build (components multiplier). Use negative to subtract.
|
||||
- `ref_id`: [string=''] A prefix to add to all the references from this project.
|
||||
- `angle_positive`: [boolean=true] Always use positive values for the footprint rotation.
|
||||
- `bottom_negative_x`: [boolean=false] Use negative X coordinates for footprints on bottom layer (for XYRS).
|
||||
|
|
@ -1155,7 +1155,7 @@ Next time you need this list just use an alias, like this:
|
|||
- `plot_footprint_refs`: [boolean=true] Include the footprint references.
|
||||
- `plot_footprint_values`: [boolean=true] Include the footprint values.
|
||||
- `plot_sheet_reference`: [boolean=false] Currently without effect.
|
||||
- `subtract_mask_from_silk`: [boolean=false] Substract the solder mask from the silk screen.
|
||||
- `subtract_mask_from_silk`: [boolean=false] Subtract the solder mask from the silk screen.
|
||||
- `tent_vias`: [boolean=true] Cover the vias.
|
||||
- `uppercase_extensions`: [boolean=false] Use uppercase names for the extensions.
|
||||
- `use_aux_axis_as_origin`: [boolean=false] Use the auxiliary axis as origin for coordinates.
|
||||
|
|
@ -1486,7 +1486,7 @@ Next time you need this list just use an alias, like this:
|
|||
|
||||
* PDF (Portable Document Format)
|
||||
* Type: `pdf`
|
||||
* Description: Exports the PCB to the most common exhange format. Suitable for printing.
|
||||
* Description: Exports the PCB to the most common exchange format. Suitable for printing.
|
||||
Note that this output isn't the best for documating your project.
|
||||
This output is what you get from the File/Plot menu in pcbnew.
|
||||
* Valid keys:
|
||||
|
|
@ -1557,7 +1557,7 @@ Next time you need this list just use an alias, like this:
|
|||
|
||||
* PDF PCB Print (Portable Document Format)
|
||||
* Type: `pdf_pcb_print`
|
||||
* Description: Exports the PCB to the most common exhange format. Suitable for printing.
|
||||
* Description: Exports the PCB to the most common exchange format. Suitable for printing.
|
||||
This is the main format to document your PCB.
|
||||
This output is what you get from the 'File/Print' menu in pcbnew.
|
||||
* Valid keys:
|
||||
|
|
@ -1598,7 +1598,7 @@ Next time you need this list just use an alias, like this:
|
|||
|
||||
* PDF Schematic Print (Portable Document Format)
|
||||
* Type: `pdf_sch_print`
|
||||
* Description: Exports the PCB to the most common exhange format. Suitable for printing.
|
||||
* Description: Exports the PCB to the most common exchange format. Suitable for printing.
|
||||
This is the main format to document your schematic.
|
||||
This output is what you get from the 'File/Print' menu in eeschema.
|
||||
* Valid keys:
|
||||
|
|
@ -1624,7 +1624,7 @@ Next time you need this list just use an alias, like this:
|
|||
* Pick & place
|
||||
* Type: `position`
|
||||
* Description: Generates the file with position information for the PCB components, used by the pick and place machine.
|
||||
This output is what you get from the 'File/Fabrication output/Footprint poistion (.pos) file' menu in pcbnew.
|
||||
This output is what you get from the 'File/Fabrication output/Footprint position (.pos) file' menu in pcbnew.
|
||||
* Valid keys:
|
||||
- `comment`: [string=''] A comment for documentation purposes.
|
||||
- `dir`: [string='./'] Output directory for the generated files. If it starts with `+` the rest is concatenated to the default dir.
|
||||
|
|
@@ -1639,7 +1639,7 @@ Next time you need this list just use an alias, like this:
- `columns`: [list(dict)|list(string)] Which columns are included in the output.
* Valid keys:
- `id`: [string=''] [Ref,Val,Package,PosX,PosY,Rot,Side] Internal name.
- `name`: [string=''] Name to use in the outut file. The id is used when empty.
- `name`: [string=''] Name to use in the output file. The id is used when empty.
- `dnf_filter`: [string|list(string)='_none'] Name of the filter to mark components as not fitted.
A short-cut to use for simple cases where a variant is an overkill.
- `format`: [string='ASCII'] [ASCII,CSV] Format for the position file.
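Putting the `columns` and `format` keys together, a hedged sketch of a `position` output; the directory and the column choices are just placeholders:

```yaml
outputs:
  - name: 'position_example'
    comment: 'Pick and place file for assembly'
    type: position
    dir: Example/position_dir
    options:
      format: CSV
      columns:
        - id: Ref
          name: Reference   # an empty `name` would fall back to the id
        - id: Val
        - id: PosX
        - id: PosY
        - id: Rot
        - id: Side
```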
@ -2136,7 +2136,7 @@ But looking at the 1 k resistors is harder. We have 80, three from *merge_1*, on
|
|||
So we have 10*3+20*3+30=120, this is clear, but the BoM says they are R1-R3 R2-R4 R5, which is a little bit confusing.
|
||||
In this simple example is easy to correlate R1-R3 to *merge_1*, R2-R4 to *merge_2* and R5 to *merge_1*.
|
||||
For bigger projects this gets harder.
|
||||
Lets assing an *id* to each project, we'll use 'A' for *merge_1*, 'B' for *merge_2* and 'C' for *merge_3*:
|
||||
Lets assign an *id* to each project, we'll use 'A' for *merge_1*, 'B' for *merge_2* and 'C' for *merge_3*:
|
||||
|
||||
```yaml
|
||||
kibot:
|
||||
|
|
@@ -2278,7 +2278,7 @@ import:
```

This will import all outputs and filters, but not variants or globals.
Also note that imported globals has more precendence than the ones defined in the same file.
Also note that imported globals has more precedence than the ones defined in the same file.
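For reference, a sketch of the selective form of `import`; the dict keys (`file`, `outputs`, `filters`, `variants`) match the import parser touched later in this commit, while the list-of-names values are an assumption:

```yaml
import:
  - file: common.kibot.yaml
    # Assumption: names are given as lists; sections that are omitted are not imported
    outputs: [gerbers, position]
    filters: [exclude_tp_fid]
```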
|
||||
|
||||
## Usage
|
||||
|
||||
|
|
@ -2321,7 +2321,7 @@ pcb_files:
|
|||
kibot -b $(PCB) -c $(KIBOT_CFG)
|
||||
```
|
||||
|
||||
If you need to supress messages use `--quiet` or `-q` and if you need to get more informatio about what's going on use `--verbose` or `-v`.
|
||||
If you need to suppress messages use `--quiet` or `-q` and if you need to get more information about what's going on use `--verbose` or `-v`.
|
||||
|
||||
If you want to generate only some of the outputs use:
|
||||
|
||||
|
|
|
|||
|
|
@ -8,7 +8,7 @@
|
|||
|
||||
**Important for KiCad 6 users**:
|
||||
- Only the code in the git repo supports KiCad 6 (no stable release yet)
|
||||
- The docker images taget `ki6` has KiCad 6, but you need to use the KiBot from the repo, not the one in the images.
|
||||
- The docker images target `ki6` has KiCad 6, but you need to use the KiBot from the repo, not the one in the images.
|
||||
- The docker image with KiCad 6 and KiBot that supports it is tagged as `dev_k6`
|
||||
- The GitHub action with KiCad 6 support is tagged as `v1_k6`
|
||||
- When using KiCad 6 you must migrate the whole project and pass the migrated files to KiBot.
|
||||
|
|
@ -254,7 +254,7 @@ This selection isn't stored in the PCB file. The global `units` value is used by
|
|||
|
||||
#### Output directory option
|
||||
|
||||
The `out_dir` option can define the base outut directory. This is the same as the `-d`/`--out-dir` command line option.
|
||||
The `out_dir` option can define the base output directory. This is the same as the `-d`/`--out-dir` command line option.
|
||||
Note that the command line option has precedence over it.
|
||||
|
||||
Expansion patterns are applied to this value, but you should avoid using patterns that expand according to the context, i.e. **%c**, **%d**, **%f**, **%F**, **%p** and **%r**.
|
||||
|
|
@ -314,7 +314,7 @@ Both concepts are closely related. In fact variants can use filters.
|
|||
The current implementation of the filters allow to exclude components from some of the processing stages. The most common use is to exclude them from some output.
|
||||
In the future more advanced filters will allow modification of component details.
|
||||
|
||||
Variants are currently used to create *assembly variants*. This concept is used to manufature one PCB used for various products.
|
||||
Variants are currently used to create *assembly variants*. This concept is used to manufacture one PCB used for various products.
|
||||
You can learn more about KiBot variants on the following [example repo](https://inti-cmnb.github.io/kibot_variants_arduprog/).
|
||||
|
||||
As mentioned above the current use of filters is to mark some components. Mainly to exclude them, but also to mark them as special.
|
||||
|
|
@ -777,7 +777,7 @@ But looking at the 1 k resistors is harder. We have 80, three from *merge_1*, on
|
|||
So we have 10*3+20*3+30=120, this is clear, but the BoM says they are R1-R3 R2-R4 R5, which is a little bit confusing.
|
||||
In this simple example is easy to correlate R1-R3 to *merge_1*, R2-R4 to *merge_2* and R5 to *merge_1*.
|
||||
For bigger projects this gets harder.
|
||||
Lets assing an *id* to each project, we'll use 'A' for *merge_1*, 'B' for *merge_2* and 'C' for *merge_3*:
|
||||
Lets assign an *id* to each project, we'll use 'A' for *merge_1*, 'B' for *merge_2* and 'C' for *merge_3*:
|
||||
|
||||
```yaml
|
||||
kibot:
|
||||
|
|
@ -919,7 +919,7 @@ import:
|
|||
```
|
||||
|
||||
This will import all outputs and filters, but not variants or globals.
|
||||
Also note that imported globals has more precendence than the ones defined in the same file.
|
||||
Also note that imported globals has more precedence than the ones defined in the same file.
|
||||
|
||||
## Usage
|
||||
|
||||
|
|
@ -962,7 +962,7 @@ pcb_files:
|
|||
kibot -b $(PCB) -c $(KIBOT_CFG)
|
||||
```
|
||||
|
||||
If you need to supress messages use `--quiet` or `-q` and if you need to get more informatio about what's going on use `--verbose` or `-v`.
|
||||
If you need to suppress messages use `--quiet` or `-q` and if you need to get more information about what's going on use `--verbose` or `-v`.
|
||||
|
||||
If you want to generate only some of the outputs use:
|
||||
|
||||
|
|
|
|||
|
|
@ -82,7 +82,7 @@ outputs:
|
|||
- file: ''
|
||||
# [string=''] Name to identify this source. If empty we use the name of the schematic
|
||||
name: ''
|
||||
# [number=1] Number of boards to build (components multiplier). Use negative to substract
|
||||
# [number=1] Number of boards to build (components multiplier). Use negative to subtract
|
||||
number: 1
|
||||
# [string=''] A prefix to add to all the references from this project
|
||||
ref_id: ''
|
||||
|
|
@ -517,7 +517,7 @@ outputs:
|
|||
plot_footprint_values: true
|
||||
# [boolean=false] Currently without effect
|
||||
plot_sheet_reference: false
|
||||
# [boolean=false] Substract the solder mask from the silk screen
|
||||
# [boolean=false] Subtract the solder mask from the silk screen
|
||||
subtract_mask_from_silk: false
|
||||
# [boolean=true] Cover the vias
|
||||
tent_vias: true
|
||||
|
|
@ -925,7 +925,7 @@ outputs:
|
|||
# Note that this output isn't the best for documating your project.
|
||||
# This output is what you get from the File/Plot menu in pcbnew.
|
||||
- name: 'pdf_example'
|
||||
comment: 'Exports the PCB to the most common exhange format. Suitable for printing.'
|
||||
comment: 'Exports the PCB to the most common exchange format. Suitable for printing.'
|
||||
type: 'pdf'
|
||||
dir: 'Example/pdf_dir'
|
||||
options:
|
||||
|
|
@ -978,7 +978,7 @@ outputs:
|
|||
# This is the main format to document your PCB.
|
||||
# This output is what you get from the 'File/Print' menu in pcbnew.
|
||||
- name: 'pdf_pcb_print_example'
|
||||
comment: 'Exports the PCB to the most common exhange format. Suitable for printing.'
|
||||
comment: 'Exports the PCB to the most common exchange format. Suitable for printing.'
|
||||
type: 'pdf_pcb_print'
|
||||
dir: 'Example/pdf_pcb_print_dir'
|
||||
options:
|
||||
|
|
@ -1016,7 +1016,7 @@ outputs:
|
|||
# This is the main format to document your schematic.
|
||||
# This output is what you get from the 'File/Print' menu in eeschema.
|
||||
- name: 'pdf_sch_print_example'
|
||||
comment: 'Exports the PCB to the most common exhange format. Suitable for printing.'
|
||||
comment: 'Exports the PCB to the most common exchange format. Suitable for printing.'
|
||||
type: 'pdf_sch_print'
|
||||
dir: 'Example/pdf_sch_print_dir'
|
||||
options:
|
||||
|
|
@ -1033,7 +1033,7 @@ outputs:
|
|||
# Not fitted components are crossed
|
||||
variant: ''
|
||||
# Pick & place:
|
||||
# This output is what you get from the 'File/Fabrication output/Footprint poistion (.pos) file' menu in pcbnew.
|
||||
# This output is what you get from the 'File/Fabrication output/Footprint position (.pos) file' menu in pcbnew.
|
||||
- name: 'position_example'
|
||||
comment: 'Generates the file with position information for the PCB components, used by the pick and place machine.'
|
||||
type: 'position'
|
||||
|
|
@ -1045,7 +1045,7 @@ outputs:
|
|||
columns:
|
||||
# [string=''] [Ref,Val,Package,PosX,PosY,Rot,Side] Internal name
|
||||
- id: 'Ref'
|
||||
# [string=''] Name to use in the outut file. The id is used when empty
|
||||
# [string=''] Name to use in the output file. The id is used when empty
|
||||
name: 'Reference'
|
||||
# [string|list(string)='_none'] Name of the filter to mark components as not fitted.
|
||||
# A short-cut to use for simple cases where a variant is an overkill
|
||||
|
|
|
|||
|
|
@ -280,7 +280,7 @@ def detect_kicad():
|
|||
# Bug in KiCad (#6989), prints to stderr:
|
||||
# `../src/common/stdpbase.cpp(62): assert "traits" failed in Get(test_dir): create wxApp before calling this`
|
||||
# Found in KiCad 5.1.8, 5.1.9
|
||||
# So we temporarily supress stderr
|
||||
# So we temporarily suppress stderr
|
||||
with hide_stderr():
|
||||
GS.kicad_conf_path = pcbnew.GetKicadConfigPath()
|
||||
GS.pro_ext = '.pro'
|
||||
|
|
|
|||
|
|
@ -182,7 +182,7 @@ class ComponentGroup(object):
|
|||
def add_component(self, c):
|
||||
""" Add a component to the group.
|
||||
Avoid repetition, checks if suitable.
|
||||
Note: repeated components happend when a component contains more than one unit """
|
||||
Note: repeated components happens when a component contains more than one unit """
|
||||
if not self.components:
|
||||
self.components.append(c)
|
||||
self.refs[c.ref+c.project] = c
|
||||
|
|
@ -382,7 +382,7 @@ def get_value_sort(comp, fallback_ref=False):
|
|||
if res:
|
||||
value, (mult, mult_s), unit = res
|
||||
if comp.ref_prefix in "CL":
|
||||
# fempto Farads
|
||||
# femto Farads
|
||||
value = "{0:15d}".format(int(value * 1e15 * mult + 0.1))
|
||||
else:
|
||||
# milli Ohms
|
||||
|
|
@ -473,7 +473,7 @@ def group_components(cfg, components):
|
|||
y_origin = 0.0
|
||||
if cfg.use_aux_axis_as_origin:
|
||||
(x_origin, y_origin) = GS.get_aux_origin()
|
||||
logger.debug('Using auxiliar origin: x={} y={}'.format(x_origin, y_origin))
|
||||
logger.debug('Using auxiliary origin: x={} y={}'.format(x_origin, y_origin))
|
||||
# Process the groups
|
||||
for g in groups:
|
||||
# Sort the references within each group
|
||||
|
|
|
|||
|
|
@ -72,7 +72,7 @@ def write_csv(filename, ext, groups, headings, head_names, cfg):
|
|||
head_names = [list of headings to display in the BoM file]
|
||||
cfg = BoMOptions object with all the configuration
|
||||
"""
|
||||
# Delimeter is assumed from file extension
|
||||
# Delimiter is assumed from file extension
|
||||
# Override delimiter if separator specified
|
||||
if ext == "csv" and cfg.csv.separator:
|
||||
delimiter = cfg.csv.separator
|
||||
|
|
|
|||
|
|
@ -90,7 +90,7 @@ def get_prefix(prefix):
|
|||
if prefix in PREFIX_GIGA:
|
||||
return 1.0e9, 'G'
|
||||
# Unknown, we shouldn't get here because the regex matched
|
||||
# BUT: I found that sometimes unexpected things happend, like mu matching micro and then we reaching this code
|
||||
# BUT: I found that sometimes unexpected things happen, like mu matching micro and then we reaching this code
|
||||
# Now is fixed, but I can't be sure some bizarre case is overlooked
|
||||
logger.error('Unknown prefix, please report')
|
||||
return 1, ''
|
||||
|
|
@ -193,7 +193,7 @@ def compare_values(c1, c2):
|
|||
# Values match
|
||||
if u1 == u2:
|
||||
return True # Units match
|
||||
# No longer posible because now we use the prefix to determine absent units
|
||||
# No longer possible because now we use the prefix to determine absent units
|
||||
# if not u1:
|
||||
# return True # No units for component 1
|
||||
# if not u2:
|
||||
|
|
|
|||
|
|
@ -240,12 +240,12 @@ class CfgYamlReader(object):
|
|||
if outs is None and explicit_outs and 'outputs' not in data:
|
||||
logger.warning(W_NOOUTPUTS+"No outputs found in `{}`".format(fn_rel))
|
||||
|
||||
def _parse_import_filters(self, fils, explicit_fils, fn_rel, data):
|
||||
if (fils is None or len(fils) > 0) and 'filters' in data:
|
||||
def _parse_import_filters(self, filters, explicit_fils, fn_rel, data):
|
||||
if (filters is None or len(filters) > 0) and 'filters' in data:
|
||||
i_fils = self._parse_filters(data['filters'])
|
||||
if fils is not None:
|
||||
if filters is not None:
|
||||
sel_fils = {}
|
||||
for f in fils:
|
||||
for f in filters:
|
||||
if f in i_fils:
|
||||
sel_fils[f] = i_fils[f]
|
||||
else:
|
||||
|
|
@ -257,7 +257,7 @@ class CfgYamlReader(object):
|
|||
else:
|
||||
RegOutput.add_filters(sel_fils)
|
||||
logger.debug('Filters loaded from `{}`: {}'.format(fn_rel, sel_fils.keys()))
|
||||
if fils is None and explicit_fils and 'filters' not in data:
|
||||
if filters is None and explicit_fils and 'filters' not in data:
|
||||
logger.warning(W_NOFILTERS+"No filters found in `{}`".format(fn_rel))
|
||||
|
||||
def _parse_import_variants(self, vars, explicit_vars, fn_rel, data):
|
||||
|
|
@ -313,7 +313,7 @@ class CfgYamlReader(object):
|
|||
if isinstance(entry, str):
|
||||
fn = entry
|
||||
outs = None
|
||||
fils = []
|
||||
filters = []
|
||||
vars = []
|
||||
globals = []
|
||||
explicit_outs = True
|
||||
|
|
@ -321,7 +321,7 @@ class CfgYamlReader(object):
|
|||
explicit_vars = False
|
||||
explicit_globals = False
|
||||
elif isinstance(entry, dict):
|
||||
fn = outs = fils = vars = globals = None
|
||||
fn = outs = filters = vars = globals = None
|
||||
explicit_outs = explicit_fils = explicit_vars = explicit_globals = False
|
||||
for k, v in entry.items():
|
||||
if k == 'file':
|
||||
|
|
@ -332,7 +332,7 @@ class CfgYamlReader(object):
|
|||
outs = self._parse_import_items('outputs', fn, v)
|
||||
explicit_outs = True
|
||||
elif k == 'filters':
|
||||
fils = self._parse_import_items('filters', fn, v)
|
||||
filters = self._parse_import_items('filters', fn, v)
|
||||
explicit_fils = True
|
||||
elif k == 'variants':
|
||||
vars = self._parse_import_items('variants', fn, v)
|
||||
|
|
@ -355,7 +355,7 @@ class CfgYamlReader(object):
|
|||
# Outputs
|
||||
self._parse_import_outputs(outs, explicit_outs, fn_rel, data)
|
||||
# Filters
|
||||
self._parse_import_filters(fils, explicit_fils, fn_rel, data)
|
||||
self._parse_import_filters(filters, explicit_fils, fn_rel, data)
|
||||
# Variants
|
||||
self._parse_import_variants(vars, explicit_vars, fn_rel, data)
|
||||
# Globals
|
||||
|
|
@ -392,7 +392,7 @@ class CfgYamlReader(object):
|
|||
# List of outputs
|
||||
version = None
|
||||
globals_found = False
|
||||
# Analize each section
|
||||
# Analyze each section
|
||||
for k, v in data.items():
|
||||
# logger.debug('{} {}'.format(k, v))
|
||||
if k == 'kiplot' or k == 'kibot':
|
||||
|
|
@ -490,7 +490,7 @@ def print_output_options(name, cl, indent):
|
|||
ind_help = len(preface)*' '
|
||||
for ln in range(1, clines):
|
||||
text = lines[ln].strip()
|
||||
# Dots at the beggining are replaced by spaces.
|
||||
# Dots at the beginning are replaced by spaces.
|
||||
# Used to keep indentation.
|
||||
if text[0] == '.':
|
||||
for i in range(1, len(text)):
|
||||
|
|
@ -538,10 +538,10 @@ def print_output_help(name):
|
|||
|
||||
|
||||
def print_preflights_help():
|
||||
pres = BasePreFlight.get_registered()
|
||||
logger.debug('{} supported preflights'.format(len(pres)))
|
||||
prefs = BasePreFlight.get_registered()
|
||||
logger.debug('{} supported preflights'.format(len(prefs)))
|
||||
print('Supported preflight options:\n')
|
||||
for n, o in OrderedDict(sorted(pres.items())).items():
|
||||
for n, o in OrderedDict(sorted(prefs.items())).items():
|
||||
help, options = o.get_doc()
|
||||
if help is None:
|
||||
help = 'Undocumented'
|
||||
|
|
@ -551,10 +551,10 @@ def print_preflights_help():
|
|||
|
||||
|
||||
def print_filters_help():
|
||||
fils = RegFilter.get_registered()
|
||||
logger.debug('{} supported filters'.format(len(fils)))
|
||||
filters = RegFilter.get_registered()
|
||||
logger.debug('{} supported filters'.format(len(filters)))
|
||||
print('Supported filters:\n')
|
||||
for n, o in OrderedDict(sorted(fils.items())).items():
|
||||
for n, o in OrderedDict(sorted(filters.items())).items():
|
||||
help = o.__doc__
|
||||
if help is None:
|
||||
help = 'Undocumented'
|
||||
|
|
@ -582,7 +582,7 @@ def print_example_options(f, cls, name, indent, po, is_list=False):
|
|||
if help:
|
||||
help_lines = help.split('\n')
|
||||
for hl in help_lines:
|
||||
# Dots at the beggining are replaced by spaces.
|
||||
# Dots at the beginning are replaced by spaces.
|
||||
# Used to keep indentation.
|
||||
hl = hl.strip()
|
||||
if hl[0] == '.':
|
||||
|
|
@ -641,8 +641,8 @@ def create_example(pcb_file, out_dir, copy_options, copy_expand):
|
|||
f.write('kibot:\n version: 1\n')
|
||||
# Preflights
|
||||
f.write('\npreflight:\n')
|
||||
pres = BasePreFlight.get_registered()
|
||||
for n, o in OrderedDict(sorted(pres.items())).items():
|
||||
prefs = BasePreFlight.get_registered()
|
||||
for n, o in OrderedDict(sorted(prefs.items())).items():
|
||||
if o.__doc__:
|
||||
lines = trim(o.__doc__.rstrip()+'.')
|
||||
for ln in lines:
|
||||
|
|
|
|||
|
|
@ -30,7 +30,7 @@ class Generic(BaseFilter): # noqa: F821
|
|||
""" Generic filter
|
||||
This filter is based on regular expressions.
|
||||
It also provides some shortcuts for common situations.
|
||||
Note that matches aren't case sensitive and spaces at the beggining and the end are removed.
|
||||
Note that matches aren't case sensitive and spaces at the beginning and the end are removed.
|
||||
The internal `_mechanical` filter emulates the KiBoM behavior for default exclusions.
|
||||
The internal `_kicost_dnp` filter emulates KiCost's `dnp` field """
|
||||
def __init__(self):
|
||||
|
|
@ -53,7 +53,7 @@ class Generic(BaseFilter): # noqa: F821
|
|||
self.exclude_value = False
|
||||
""" Exclude components if their 'Value' is any of the keys """
|
||||
self.config_field = 'Config'
|
||||
""" Name of the field used to clasify components """
|
||||
""" Name of the field used to classify components """
|
||||
self.config_separators = ' ,'
|
||||
""" Characters used to separate options inside the config field """
|
||||
self.exclude_config = False
|
||||
|
|
|
|||
|
|
@ -58,7 +58,7 @@ class Subparts(BaseFilter): # noqa: F821
|
|||
self.use_ref_sep_for_first = True
|
||||
""" Force the reference separator use even for the first component in the list (KiCost behavior) """
|
||||
self.value_alt_field = 'value_subparts'
|
||||
""" Field containing replacements for the `Value` field. So we get real values for splitted parts """
|
||||
""" Field containing replacements for the `Value` field. So we get real values for split parts """
|
||||
|
||||
def config(self, parent):
|
||||
super().config(parent)
|
||||
|
|
@ -139,13 +139,13 @@ class Subparts(BaseFilter): # noqa: F821
|
|||
except ValueError:
|
||||
logger.error('Internal error qty_to_float("{}"), please report'.format(qty))
|
||||
|
||||
def do_split(self, comp, max_num_subparts, splitted_fields):
|
||||
def do_split(self, comp, max_num_subparts, split_fields):
|
||||
""" Split `comp` according to the detected subparts """
|
||||
# Split it
|
||||
multi_part = max_num_subparts > 1
|
||||
if multi_part and GS.debug_level > 1:
|
||||
logger.debug("Splitting {} in {} subparts".format(comp.ref, max_num_subparts))
|
||||
splitted = []
|
||||
split = []
|
||||
# Compute the total for the modified value
|
||||
total_parts = max_num_subparts if self.modify_first_value else max_num_subparts-1
|
||||
# Check if we have replacements for the `Value` field
|
||||
|
|
@ -161,7 +161,7 @@ class Subparts(BaseFilter): # noqa: F821
|
|||
if self.use_ref_sep_for_first:
|
||||
new_comp.ref = new_comp.ref+self.ref_sep+str(i+1)
|
||||
elif i > 0:
|
||||
# I like it better. The first is usually the real component, the rest are accesories.
|
||||
# I like it better. The first is usually the real component, the rest are accessories.
|
||||
new_comp.ref = new_comp.ref+self.ref_sep+str(i)
|
||||
# Adjust the suffix to be "sort friendly"
|
||||
# Currently useless, but could help in the future
|
||||
|
|
@ -180,10 +180,10 @@ class Subparts(BaseFilter): # noqa: F821
|
|||
prev_qty = None
|
||||
prev_field = None
|
||||
max_qty = 0
|
||||
if not self.check_multiplier.intersection(splitted_fields):
|
||||
if not self.check_multiplier.intersection(split_fields):
|
||||
# No field to check for qty here, default to 1
|
||||
max_qty = 1
|
||||
for field, values in splitted_fields.items():
|
||||
for field, values in split_fields.items():
|
||||
check_multiplier = field in self.check_multiplier
|
||||
value = ''
|
||||
qty = '1'
|
||||
|
|
@ -203,23 +203,23 @@ class Subparts(BaseFilter): # noqa: F821
|
|||
new_comp.set_field(field+'_qty', qty)
|
||||
max_qty = max(max_qty, self.qty_to_float(qty))
|
||||
new_comp.qty = max_qty
|
||||
splitted.append(new_comp)
|
||||
split.append(new_comp)
|
||||
if not multi_part and int(max_qty) == 1:
|
||||
# No real split and no multiplier
|
||||
return
|
||||
if GS.debug_level > 2:
|
||||
logger.debug('Old component: '+comp.ref+' '+str([str(f) for f in comp.fields]))
|
||||
logger.debug('Fields to split: '+str(splitted_fields))
|
||||
logger.debug('Fields to split: '+str(split_fields))
|
||||
logger.debug('New components:')
|
||||
for c in splitted:
|
||||
for c in split:
|
||||
logger.debug(' '+c.ref+' '+str([str(f) for f in c.fields]))
|
||||
return splitted
|
||||
return split
|
||||
|
||||
def filter(self, comp):
|
||||
""" Look for fields containing `part1; mult:part2; etc.` """
|
||||
# Analyze how to split this component
|
||||
max_num_subparts = 0
|
||||
splitted_fields = {}
|
||||
split_fields = {}
|
||||
field_max = None
|
||||
for field in self._fields:
|
||||
value = comp.get_field_value(field)
|
||||
|
|
@ -227,16 +227,16 @@ class Subparts(BaseFilter): # noqa: F821
|
|||
# Skip it if not used
|
||||
continue
|
||||
subparts = self.subpart_list(value)
|
||||
splitted_fields[field] = subparts
|
||||
split_fields[field] = subparts
|
||||
num_subparts = len(subparts)
|
||||
if num_subparts > max_num_subparts:
|
||||
field_max = field
|
||||
max_num_subparts = num_subparts
|
||||
# Print a warning if this field has a different ammount
|
||||
# Print a warning if this field has a different amount
|
||||
if num_subparts != max_num_subparts:
|
||||
logger.warning(W_NUMSUBPARTS+'Different subparts ammount on {r}, field {c} has {cn} and {lc} has {lcn}.'
|
||||
logger.warning(W_NUMSUBPARTS+'Different subparts amount on {r}, field {c} has {cn} and {lc} has {lcn}.'
|
||||
.format(r=comp.ref, c=field_max, cn=max_num_subparts, lc=field, lcn=num_subparts))
|
||||
if len(splitted_fields) == 0:
|
||||
if len(split_fields) == 0:
|
||||
# Nothing to split
|
||||
return
|
||||
# Split the manufacturer name
|
||||
|
|
@ -253,6 +253,6 @@ class Subparts(BaseFilter): # noqa: F821
|
|||
for i in range(len(manfs)-1):
|
||||
if manfs[i+1] == '~':
|
||||
manfs[i+1] = manfs[i]
|
||||
splitted_fields[self.manf_field] = manfs
|
||||
split_fields[self.manf_field] = manfs
|
||||
# Now do the work
|
||||
return self.do_split(comp, max_num_subparts, splitted_fields)
|
||||
return self.do_split(comp, max_num_subparts, split_fields)
|
||||
|
|
|
|||
|
|
@ -26,7 +26,7 @@ class Var_Rename(BaseFilter): # noqa: F821
|
|||
self.variant_to_value = False
|
||||
""" Rename fields matching the variant to the value of the component """
|
||||
self.force_variant = ''
|
||||
""" Use this variant instead of the current variant. Usefull for IBoM variants """
|
||||
""" Use this variant instead of the current variant. Useful for IBoM variants """
|
||||
|
||||
def config(self, parent):
|
||||
super().config(parent)
|
||||
|
|
|
|||
|
|
@ -7,7 +7,7 @@ import os
|
|||
try:
|
||||
import pcbnew
|
||||
except ImportError:
|
||||
# This is catched by __main__, ignore the error here
|
||||
# This is caught by __main__, ignore the error here
|
||||
class pcbnew(object):
|
||||
pass
|
||||
from datetime import datetime, date
|
||||
|
|
|
|||
|
|
@ -526,7 +526,7 @@ class Bracket(SExpBase):
|
|||
|
||||
def tosexp(self, tosexp=tosexp):
|
||||
bra = self._bra
|
||||
ket = BRACKETS[self._bra]
|
||||
ke = BRACKETS[self._bra]
|
||||
c = ''
|
||||
for i, v in enumerate(self._val):
|
||||
v = tosexp(v)
|
||||
|
|
@ -537,7 +537,7 @@ class Bracket(SExpBase):
|
|||
# Avoid spaces at the end of lines
|
||||
c = c.rstrip(' ')
|
||||
c += v
|
||||
return uformat("{0}{1}{2}", bra, c, ket)
|
||||
return uformat("{0}{1}{2}", bra, c, ke)
|
||||
|
||||
|
||||
def bracket(val, bra):
|
||||
|
|
|
|||
|
|
@ -197,7 +197,7 @@ class DrawPoligon(object):
|
|||
pol_re = re.compile(r'P\s+(\d+)\s+' # 0 Number of points
|
||||
r'(\d+)\s+' # 1 Sub-part (0 == all)
|
||||
r'([012])\s+' # 2 Which representation (0 == both) for DeMorgan
|
||||
r'(-?\d+)\s+' # 3 Thickness (Components from 74xx.lib has poligons with -1000)
|
||||
r'(-?\d+)\s+' # 3 Thickness (Components from 74xx.lib has polygons with -1000)
|
||||
r'((?:-?\d+\s+)+)' # 4 The points
|
||||
r'([NFf])') # 5 Normal, Filled
|
||||
|
||||
|
|
@ -208,7 +208,7 @@ class DrawPoligon(object):
|
|||
def parse(line):
|
||||
m = DrawPoligon.pol_re.match(line)
|
||||
if not m:
|
||||
logger.warning(W_BADPOLI + 'Unknown poligon definition `{}`'.format(line))
|
||||
logger.warning(W_BADPOLI + 'Unknown polygon definition `{}`'.format(line))
|
||||
return None
|
||||
pol = DrawPoligon()
|
||||
g = m.groups()
|
||||
|
|
@ -219,7 +219,7 @@ class DrawPoligon(object):
|
|||
pol.fill = g[5]
|
||||
coords = _split_space(g[4])
|
||||
if len(coords) != 2*pol.points:
|
||||
logger.warning(W_POLICOORDS + 'Expected {} coordinates and got {} in poligon'.format(2*pol.points, len(coords)))
|
||||
logger.warning(W_POLICOORDS + 'Expected {} coordinates and got {} in polygon'.format(2*pol.points, len(coords)))
|
||||
pol.points = int(len(coords)/2)
|
||||
pol.coords = [int(c) for c in coords]
|
||||
return pol
|
||||
|
|
@ -873,7 +873,7 @@ class SchematicComponent(object):
|
|||
- footprint_y: y position of the part in the pick & place.
|
||||
- footprint_w: width of the footprint (pads only).
|
||||
- footprint_h: height of the footprint (pads only)
|
||||
- qty: ammount of this part used.
|
||||
- qty: amount of this part used.
|
||||
"""
|
||||
ref_re = re.compile(r'([^\d]+)([\?\d]+)')
|
||||
|
||||
|
|
@ -969,7 +969,7 @@ class SchematicComponent(object):
|
|||
self.dfields_bkp = {f.name.lower(): f for f in self.fields_bkp}
|
||||
|
||||
def _solve_ref(self, path):
|
||||
""" Look fo the correct reference for this path.
|
||||
""" Look for the correct reference for this path.
|
||||
Returns the default reference if no paths defined.
|
||||
Returns the first not empty reference if the current is empty. """
|
||||
ref = self.f_ref
|
||||
|
|
@ -1764,7 +1764,7 @@ class Schematic(object):
|
|||
self.sub_sheets[c].save(file, dest_dir)
|
||||
|
||||
def save_variant(self, dest_dir):
|
||||
# Currently imposible
|
||||
# Currently impossible
|
||||
# if not os.path.exists(dest_dir):
|
||||
# os.makedirs(dest_dir)
|
||||
lib_yes = os.path.join(dest_dir, 'y.lib')
|
||||
|
|
|
|||
|
|
@ -66,7 +66,7 @@ def _load_actions(path, load_internals=False):
|
|||
|
||||
|
||||
def load_actions():
|
||||
""" Load all the available ouputs and preflights """
|
||||
""" Load all the available outputs and preflights """
|
||||
global actions_loaded
|
||||
if actions_loaded:
|
||||
return
|
||||
|
|
@ -378,7 +378,7 @@ def run_output(out):
|
|||
def generate_outputs(outputs, target, invert, skip_pre, cli_order):
|
||||
logger.debug("Starting outputs for board {}".format(GS.pcb_file))
|
||||
preflight_checks(skip_pre)
|
||||
# Chek if the preflights pulled options
|
||||
# Check if the preflights pulled options
|
||||
for out in RegOutput.get_prioritary_outputs():
|
||||
config_output(out)
|
||||
logger.info('- '+str(out))
|
||||
|
|
@ -448,9 +448,9 @@ def gen_global_targets(f, pre_targets, out_targets, type):
|
|||
|
||||
def get_pre_targets(targets, dependencies, is_pre):
|
||||
pcb_targets = sch_targets = ''
|
||||
pres = BasePreFlight.get_in_use_objs()
|
||||
prefs = BasePreFlight.get_in_use_objs()
|
||||
try:
|
||||
for pre in pres:
|
||||
for pre in prefs:
|
||||
tg = pre.get_targets()
|
||||
if not tg:
|
||||
continue
|
||||
|
|
|
|||
|
|
@ -7,7 +7,7 @@
|
|||
"""
|
||||
Log module
|
||||
|
||||
Handles logging initialization and formating.
|
||||
Handles logging initialization and formatting.
|
||||
"""
|
||||
import sys
|
||||
import logging
|
||||
|
|
|
|||
|
|
@ -246,7 +246,7 @@ class BaseMacroExpander(NodeTransformer):
|
|||
# Resolve macro binding.
|
||||
macro = self.isbound(macroname)
|
||||
if not macro: # pragma: no cover
|
||||
raise MacroApplicationError(f"{loc}\nin {syntax} macro invocation for '{macroname}': the name '{macroname}' is not bound to a macro.")
|
||||
raise MacroApplicationError(f"{loc}\n""in {syntax} macro invocation for '{macroname}': the name '{macroname}' is not bound to a macro.")
|
||||
|
||||
# Expand the macro.
|
||||
expansion = self._apply_macro(macro, tree, kw, macroname, target)
|
||||
|
|
@ -272,7 +272,7 @@ class BaseMacroExpander(NodeTransformer):
|
|||
|
||||
# If something went wrong, generate a standardized macro use site report.
|
||||
except Exception as err:
|
||||
msg = f"{loc}\nin {syntax} macro invocation for '{macroname}'"
|
||||
msg = f"{loc}\n""in {syntax} macro invocation for '{macroname}'"
|
||||
if isinstance(err, MacroApplicationError) and err.__cause__:
|
||||
# Telescope nested use site reports, by keeping the original
|
||||
# traceback and `__cause__`, but combining the messages.
|
||||
|
|
|
|||
|
|
@ -263,7 +263,7 @@ def name2make(name):
|
|||
|
||||
@contextmanager
|
||||
def hide_stderr():
|
||||
""" Low level stderr supression, used to hide KiCad bugs. """
|
||||
""" Low level stderr suppression, used to hide KiCad bugs. """
|
||||
newstderr = os.dup(2)
|
||||
devnull = os.open('/dev/null', os.O_WRONLY)
|
||||
os.dup2(devnull, 2)
|
||||
|
|
|
|||
|
|
@ -71,7 +71,7 @@ class BaseOutput(RegOutput):
|
|||
def get_targets(self, out_dir):
|
||||
""" Returns a list of targets generated by this output """
|
||||
if not (hasattr(self, "options") and hasattr(self.options, "get_targets")):
|
||||
logger.error("Output {} doesn't implement get_targets(), plese report it".format(self))
|
||||
logger.error("Output {} doesn't implement get_targets(), please report it".format(self))
|
||||
return []
|
||||
return self.options.get_targets(out_dir)
|
||||
|
||||
|
|
|
|||
|
|
@ -281,7 +281,7 @@ class Aggregate(Optionable):
|
|||
self.ref_id = ''
|
||||
""" A prefix to add to all the references from this project """
|
||||
self.number = 1
|
||||
""" Number of boards to build (components multiplier). Use negative to substract """
|
||||
""" Number of boards to build (components multiplier). Use negative to subtract """
|
||||
|
||||
def config(self, parent):
|
||||
super().config(parent)
|
||||
|
|
|
|||
|
|
@ -20,7 +20,7 @@ class GerberOptions(AnyLayerOptions):
|
|||
self.line_width = 0.1
|
||||
""" [0.02,2] Line_width for objects without width [mm] (KiCad 5) """
|
||||
self.subtract_mask_from_silk = False
|
||||
""" Substract the solder mask from the silk screen """
|
||||
""" Subtract the solder mask from the silk screen """
|
||||
self.use_protel_extensions = False
|
||||
""" Use legacy Protel file extensions """
|
||||
self._gerber_precision = 4.6
|
||||
|
|
|
|||
|
|
@ -0,0 +1,111 @@
|
|||
# -*- coding: utf-8 -*-
|
||||
# Copyright (c) 2020 Salvador E. Tropea
|
||||
# Copyright (c) 2020 Instituto Nacional de Tecnología Industrial
|
||||
# Copyright (c) 2018 John Beard
|
||||
# License: GPL-3.0
|
||||
# Project: KiBot (formerly KiPlot)
|
||||
# Adapted from: https://github.com/johnbeard/kiplot
|
||||
from pcbnew import (PLOT_FORMAT_GERBER, FromMM, ToMM, DIM_UNITS_MODE_MILLIMETRES, DIM_UNITS_MODE_INCHES, DIM_UNITS_MODE_AUTOMATIC)
|
||||
from .gs import GS
|
||||
from .out_any_layer import (AnyLayer, AnyLayerOptions)
|
||||
from .error import KiPlotConfigurationError
|
||||
from .macros import macros, document, output_class # noqa: F401
|
||||
from . import log
|
||||
|
||||
logger = log.get_logger()
|
||||
|
||||
|
||||
class GerberOptions(AnyLayerOptions):
|
||||
def __init__(self):
|
||||
with document:
|
||||
self.use_aux_axis_as_origin = False
|
||||
""" Use the auxiliary axis as origin for coordinates """
|
||||
self.line_width = 0.1
|
||||
""" [0.02,2] Line_width for objects without width [mm] (KiCad 5) """
|
||||
self.subtract_mask_from_silk = False
|
||||
""" Substract the solder mask from the silk screen """
|
||||
self.use_protel_extensions = False
|
||||
""" Use legacy Protel file extensions """
|
||||
self._gerber_precision = 4.6
|
||||
""" This the gerber coordinate format, can be 4.5 or 4.6 """
|
||||
self.create_gerber_job_file = True
|
||||
""" Creates a file with information about all the generated gerbers.
|
||||
You can use it in gerbview to load all gerbers at once """
|
||||
self.gerber_job_file = GS.def_global_output
|
||||
""" Name for the gerber job file (%i='job', %x='gbrjob') """
|
||||
self.use_gerber_x2_attributes = True
|
||||
""" Use the extended X2 format (otherwise use X1 formerly RS-274X) """
|
||||
self.use_gerber_net_attributes = True
|
||||
""" Include netlist metadata """
|
||||
self.disable_aperture_macros = False
|
||||
""" Disable aperture macros (workaround for buggy CAM software) (KiCad 6) """
|
||||
super().__init__()
|
||||
self._plot_format = PLOT_FORMAT_GERBER
|
||||
if GS.global_output is not None:
|
||||
self.gerber_job_file = GS.global_output
|
||||
|
||||
@property
|
||||
def gerber_precision(self):
|
||||
return self._gerber_precision
|
||||
|
||||
@gerber_precision.setter
|
||||
def gerber_precision(self, val):
|
||||
if val != 4.5 and val != 4.6:
|
||||
raise KiPlotConfigurationError("`gerber_precision` must be 4.5 or 4.6")
|
||||
self._gerber_precision = val
|
||||
|
||||
def _configure_plot_ctrl(self, po, output_dir):
|
||||
super()._configure_plot_ctrl(po, output_dir)
|
||||
po.SetSubtractMaskFromSilk(self.subtract_mask_from_silk)
|
||||
po.SetUseGerberProtelExtensions(self.use_protel_extensions)
|
||||
po.SetGerberPrecision(5 if self.gerber_precision == 4.5 else 6)
|
||||
po.SetCreateGerberJobFile(self.create_gerber_job_file)
|
||||
po.SetUseGerberX2format(self.use_gerber_x2_attributes)
|
||||
po.SetIncludeGerberNetlistInfo(self.use_gerber_net_attributes)
|
||||
po.SetUseAuxOrigin(self.use_aux_axis_as_origin)
|
||||
po.SetDrillMarksType(0)
|
||||
if GS.ki5():
|
||||
po.SetLineWidth(FromMM(self.line_width))
|
||||
else:
|
||||
po.SetDisableGerberMacros(self.disable_aperture_macros) # pragma: no cover (Ki6)
|
||||
ds = GS.board.GetDesignSettings()
|
||||
logger.error(ds.m_DimensionUnitsMode)
|
||||
ds.m_DimensionUnitsMode = DIM_UNITS_MODE_MILLIMETRES
|
||||
logger.error(ds.m_DimensionUnitsMode)
|
||||
logger.error(DIM_UNITS_MODE_AUTOMATIC)
|
||||
po.gerber_job_file = self.gerber_job_file
|
||||
|
||||
def read_vals_from_po(self, po):
|
||||
super().read_vals_from_po(po)
|
||||
# usegerberattributes
|
||||
self.use_gerber_x2_attributes = po.GetUseGerberX2format()
|
||||
# usegerberextensions
|
||||
self.use_protel_extensions = po.GetUseGerberProtelExtensions()
|
||||
# usegerberadvancedattributes
|
||||
self.use_gerber_net_attributes = po.GetIncludeGerberNetlistInfo()
|
||||
# creategerberjobfile
|
||||
self.create_gerber_job_file = po.GetCreateGerberJobFile()
|
||||
# gerberprecision
|
||||
self.gerber_precision = 4.0 + po.GetGerberPrecision()/10.0
|
||||
# subtractmaskfromsilk
|
||||
self.subtract_mask_from_silk = po.GetSubtractMaskFromSilk()
|
||||
# useauxorigin
|
||||
self.use_aux_axis_as_origin = po.GetUseAuxOrigin()
|
||||
if GS.ki5():
|
||||
# linewidth
|
||||
self.line_width = ToMM(po.GetLineWidth())
|
||||
else:
|
||||
# disableapertmacros
|
||||
self.disable_aperture_macros = po.GetDisableGerberMacros() # pragma: no cover (Ki6)
|
||||
|
||||
|
||||
@output_class
|
||||
class Gerber(AnyLayer):
|
||||
""" Gerber format
|
||||
This is the main fabrication format for the PCB.
|
||||
This output is what you get from the File/Plot menu in pcbnew. """
|
||||
def __init__(self):
|
||||
super().__init__()
|
||||
with document:
|
||||
self.options = GerberOptions
|
||||
""" [dict] Options for the `gerber` output """
|
||||
|
|
@ -45,7 +45,7 @@ class PDFOptions(DrillMarks):
|
|||
@output_class
|
||||
class PDF(AnyLayer, DrillMarks):
|
||||
""" PDF (Portable Document Format)
|
||||
Exports the PCB to the most common exhange format. Suitable for printing.
|
||||
Exports the PCB to the most common exchange format. Suitable for printing.
|
||||
Note that this output isn't the best for documating your project.
|
||||
This output is what you get from the File/Plot menu in pcbnew. """
|
||||
def __init__(self):
|
||||
|
|
|
|||
|
|
@ -149,7 +149,7 @@ class PDF_Pcb_PrintOptions(VariantOptions):
|
|||
@output_class
|
||||
class PDF_Pcb_Print(BaseOutput): # noqa: F821
|
||||
""" PDF PCB Print (Portable Document Format)
|
||||
Exports the PCB to the most common exhange format. Suitable for printing.
|
||||
Exports the PCB to the most common exchange format. Suitable for printing.
|
||||
This is the main format to document your PCB.
|
||||
This output is what you get from the 'File/Print' menu in pcbnew. """
|
||||
def __init__(self):
|
||||
|
|
|
|||
|
|
@ -89,7 +89,7 @@ class PDF_Sch_PrintOptions(VariantOptions):
|
|||
@output_class
|
||||
class PDF_Sch_Print(BaseOutput): # noqa: F821
|
||||
""" PDF Schematic Print (Portable Document Format)
|
||||
Exports the PCB to the most common exhange format. Suitable for printing.
|
||||
Exports the PCB to the most common exchange format. Suitable for printing.
|
||||
This is the main format to document your schematic.
|
||||
This output is what you get from the 'File/Print' menu in eeschema. """
|
||||
def __init__(self):
|
||||
|
|
|
|||
|
|
@ -40,7 +40,7 @@ class PosColumns(Optionable):
|
|||
self.id = ''
|
||||
""" [Ref,Val,Package,PosX,PosY,Rot,Side] Internal name """
|
||||
self.name = ''
|
||||
""" Name to use in the outut file. The id is used when empty """
|
||||
""" Name to use in the output file. The id is used when empty """
|
||||
self._id_example = 'Ref'
|
||||
self._name_example = 'Reference'
|
||||
|
||||
|
|
@ -129,17 +129,17 @@ class PositionOptions(VariantOptions):
|
|||
maxSizes[0] = maxSizes[0] + 2
|
||||
|
||||
for m in modulesStr:
|
||||
fle = bothf
|
||||
if fle is None:
|
||||
file = bothf
|
||||
if file is None:
|
||||
if m[-1] == "top":
|
||||
fle = topf
|
||||
file = topf
|
||||
else:
|
||||
fle = botf
|
||||
file = botf
|
||||
for idx, col in enumerate(m):
|
||||
if idx > 0:
|
||||
fle.write(" ")
|
||||
fle.write("{0: <{width}}".format(col, width=maxSizes[idx]))
|
||||
fle.write("\n")
|
||||
file.write(" ")
|
||||
file.write("{0: <{width}}".format(col, width=maxSizes[idx]))
|
||||
file.write("\n")
|
||||
|
||||
for f in files:
|
||||
f.write("## End\n")
|
||||
|
|
@ -168,14 +168,14 @@ class PositionOptions(VariantOptions):
|
|||
f.write("\n")
|
||||
|
||||
for m in modulesStr:
|
||||
fle = bothf
|
||||
if fle is None:
|
||||
file = bothf
|
||||
if file is None:
|
||||
if m[-1] == "top":
|
||||
fle = topf
|
||||
file = topf
|
||||
else:
|
||||
fle = botf
|
||||
fle.write(",".join('{}'.format(e) for e in m))
|
||||
fle.write("\n")
|
||||
file = botf
|
||||
file.write(",".join('{}'.format(e) for e in m))
|
||||
file.write("\n")
|
||||
|
||||
if topf is not None:
|
||||
topf.close()
|
||||
|
|
@ -227,7 +227,7 @@ class PositionOptions(VariantOptions):
|
|||
y_origin = 0.0
|
||||
if self.use_aux_axis_as_origin:
|
||||
(x_origin, y_origin) = GS.get_aux_origin()
|
||||
logger.debug('Using auxiliar origin: x={} y={}'.format(x_origin, y_origin))
|
||||
logger.debug('Using auxiliary origin: x={} y={}'.format(x_origin, y_origin))
|
||||
for m in sorted(GS.get_modules(), key=lambda c: _ref_key(c.GetReference())):
|
||||
ref = m.GetReference()
|
||||
logger.debug('P&P ref: {}'.format(ref))
|
||||
|
|
@ -294,7 +294,7 @@ class PositionOptions(VariantOptions):
|
|||
class Position(BaseOutput): # noqa: F821
|
||||
""" Pick & place
|
||||
Generates the file with position information for the PCB components, used by the pick and place machine.
|
||||
This output is what you get from the 'File/Fabrication output/Footprint poistion (.pos) file' menu in pcbnew. """
|
||||
This output is what you get from the 'File/Fabrication output/Footprint position (.pos) file' menu in pcbnew. """
|
||||
def __init__(self):
|
||||
super().__init__()
|
||||
with document:
|
||||
|
|
|
|||
|
|
@ -44,7 +44,7 @@ class RegOutput(Optionable, Registrable):
|
|||
_def_variants = {}
|
||||
# List of defined outputs
|
||||
_def_outputs = OrderedDict()
|
||||
# List of prioritary outputs
|
||||
# List of priority outputs
|
||||
_prio_outputs = OrderedDict()
|
||||
|
||||
def __init__(self):
|
||||
|
|
|
|||
|
|
@ -40,7 +40,7 @@ class BaseVariant(RegVariant):
|
|||
Use '_kibom_dnc' for the default KiBoM behavior """
|
||||
|
||||
def get_variant_field(self):
|
||||
''' Returns the name of the field used to determine if the component belongs to teh variant '''
|
||||
''' Returns the name of the field used to determine if the component belongs to the variant '''
|
||||
return None
|
||||
|
||||
def filter(self, comps):
|
||||
|
|
|
|||
|
|
@ -33,7 +33,7 @@ class IBoM(BaseVariant): # noqa: F821
|
|||
""" [string|list(string)=''] List of board variants to include in the BOM """
|
||||
|
||||
def get_variant_field(self):
|
||||
''' Returns the name of the field used to determine if the component belongs to teh variant '''
|
||||
''' Returns the name of the field used to determine if the component belongs to the variant '''
|
||||
return self.variant_field
|
||||
|
||||
def config(self, parent):
|
||||
|
|
|
|||
|
|
@ -29,7 +29,7 @@ class KiBoM(BaseVariant): # noqa: F821
|
|||
self._def_dnc_filter = None
|
||||
with document:
|
||||
self.config_field = 'Config'
|
||||
""" Name of the field used to clasify components """
|
||||
""" Name of the field used to classify components """
|
||||
self.variant = Optionable
|
||||
""" [string|list(string)=''] Board variant(s) """
|
||||
|
||||
|
|
|
|||
|
|
@ -39,7 +39,7 @@ class KiCost(BaseVariant): # noqa: F821
|
|||
Only supported internally, don't use it if you plan to use KiCost """
|
||||
|
||||
def get_variant_field(self):
|
||||
''' Returns the name of the field used to determine if the component belongs to teh variant '''
|
||||
''' Returns the name of the field used to determine if the component belongs to the variant '''
|
||||
return self.variant_field
|
||||
|
||||
def config(self, parent):
|
||||
|
|
|
|||
|
|
@ -1450,7 +1450,7 @@ def test_int_bom_variant_t3(test_dir):
|
|||
|
||||
|
||||
def test_int_bom_variant_cli(test_dir):
|
||||
""" Assing t1_v1 to default from cli. Make sure t1_v3 isn't affected """
|
||||
""" Assign t1_v1 to default from cli. Make sure t1_v3 isn't affected """
|
||||
prj = 'kibom-variante'
|
||||
ctx = context.TestContextSCH(test_dir, 'test_int_bom_variant_cli', prj, 'int_bom_var_t1_cli', BOM_DIR)
|
||||
ctx.run(extra=['--global-redef', 'variant=t1_v1'])
|
||||
|
|
@ -1471,7 +1471,7 @@ def test_int_bom_variant_cli(test_dir):
|
|||
|
||||
|
||||
def test_int_bom_variant_glb(test_dir):
|
||||
""" Assing t1_v1 to default from global. Make sure t1_v3 isn't affected """
|
||||
""" Assign t1_v1 to default from global. Make sure t1_v3 isn't affected """
|
||||
prj = 'kibom-variante'
|
||||
ctx = context.TestContextSCH(test_dir, 'test_int_bom_variant_glb', prj, 'int_bom_var_t1_glb', BOM_DIR)
|
||||
ctx.run()
|
||||
|
|
@ -1491,7 +1491,7 @@ def test_int_bom_variant_glb(test_dir):
|
|||
|
||||
|
||||
def test_int_bom_variant_cl_gl(test_dir):
|
||||
""" Assing t1_v1 to default from global.
|
||||
""" Assign t1_v1 to default from global.
|
||||
Overwrite it from cli to t1_v2.
|
||||
Make sure t1_v3 isn't affected """
|
||||
prj = 'kibom-variante'
|
||||
|
|
|
|||
|
|
@ -155,7 +155,7 @@ def test_no_get_targets(caplog):
|
|||
test.get_targets('')
|
||||
files = test.get_dependencies()
|
||||
files_pre = test_pre.get_dependencies()
|
||||
assert "Output 'Fake' (dummy) [none] doesn't implement get_targets(), plese report it" in caplog.text
|
||||
assert "Output 'Fake' (dummy) [none] doesn't implement get_targets(), please report it" in caplog.text
|
||||
assert files == [GS.sch_file]
|
||||
assert files_pre == [GS.sch_file]
|
||||
|
||||
|
|
|
|||
|
|
@ -178,7 +178,7 @@ def test_sch_missing_filtered(test_dir):
|
|||
|
||||
|
||||
def test_sch_bizarre_cases(test_dir):
|
||||
""" Poligon without points.
|
||||
""" Polygon without points.
|
||||
Pin with unknown direction. """
|
||||
if not context.ki5():
|
||||
# This is very KiCad 5 loader specific
|
||||
|
|
|
|||
|
|
@ -77,7 +77,7 @@ def test_sch_errors_l3(test_dir):
|
|||
def test_sch_errors_l5(test_dir):
|
||||
if context.ki6():
|
||||
return
|
||||
setup_ctx(test_dir, 'l5', ['Unknown poligon definition', 'Expected 6 coordinates and got 8 in poligon',
|
||||
setup_ctx(test_dir, 'l5', ['Unknown polygon definition', 'Expected 6 coordinates and got 8 in polygon',
|
||||
'Unknown square definition', 'Unknown circle definition', 'Unknown arc definition',
|
||||
'Unknown text definition', 'Unknown pin definition', 'Failed to load component definition',
|
||||
'Unknown draw element'])
|
||||
|
|
|
|||
|
|
@ -340,17 +340,17 @@ class TestContext(object):
|
|||
server = None
|
||||
else:
|
||||
os.environ['KICOST_KITSPACE_URL'] = 'http://localhost:8000'
|
||||
fo = open(self.get_out_path('server_stdout.txt'), 'at')
|
||||
fe = open(self.get_out_path('server_stderr.txt'), 'at')
|
||||
server = subprocess.Popen('./tests/utils/dummy-web-server.py', stdout=fo, stderr=fe)
|
||||
f_o = open(self.get_out_path('server_stdout.txt'), 'at')
|
||||
f_e = open(self.get_out_path('server_stderr.txt'), 'at')
|
||||
server = subprocess.Popen('./tests/utils/dummy-web-server.py', stdout=f_o, stderr=f_e)
|
||||
try:
|
||||
self.do_run(cmd, ret_val, use_a_tty, chdir_out)
|
||||
finally:
|
||||
# Always kill the fake web server
|
||||
if kicost and server is not None:
|
||||
server.terminate()
|
||||
fo.close()
|
||||
fe.close()
|
||||
f_o.close()
|
||||
f_e.close()
|
||||
# Do we need to restore the locale?
|
||||
if do_locale:
|
||||
if old_LOCPATH:
|
||||
|
|
@ -475,7 +475,7 @@ class TestContext(object):
|
|||
self.get_out_path(gen),
|
||||
self.get_out_path('gen-%d.png')]
|
||||
subprocess.check_call(cmd)
|
||||
# Chek number of pages
|
||||
# Check number of pages
|
||||
ref_pages = glob(self.get_out_path('ref-*.png'))
|
||||
gen_pages = glob(self.get_out_path('gen-*.png'))
|
||||
logging.debug('Pages {} vs {}'.format(len(gen_pages), len(ref_pages)))
|
||||
|
|
|
|||
|
|
@ -19,7 +19,7 @@ variants:
|
|||
|
||||
outputs:
|
||||
- name: 'bom_internal_subparts'
|
||||
comment: "Bill of Materials in CSV format, subparts splitted"
|
||||
comment: "Bill of Materials in CSV format, subparts split"
|
||||
type: bom
|
||||
dir: .
|
||||
options: &bom_options
|
||||
|
|
|
|||
|
|
@ -97,7 +97,7 @@ variants:
|
|||
|
||||
outputs:
|
||||
- name: 'bom_internal_subparts'
|
||||
comment: "Bill of Materials in CSV format, subparts splitted"
|
||||
comment: "Bill of Materials in CSV format, subparts split"
|
||||
type: bom
|
||||
dir: .
|
||||
options: &bom_options
|
||||
|
|
|
|||
|
|
@ -22,7 +22,7 @@ variants:
|
|||
|
||||
outputs:
|
||||
- name: 'bom_internal_subparts'
|
||||
comment: "Bill of Materials in CSV format, subparts splitted"
|
||||
comment: "Bill of Materials in CSV format, subparts split"
|
||||
type: bom
|
||||
dir: .
|
||||
options: &bom_options
|
||||