Fixed spelling
parent 751f093de9
commit 6281a40335

CHANGELOG.md (12 changed lines)
@@ -82,7 +82,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 - The default output pattern now includes the `output_id` (%I)

 ### Fixed
-- Position files now defaults to use the auxiliar origin as KiCad.
+- Position files now defaults to use the auxiliary origin as KiCad.
 Can be disabled to use absolute coordinates. (#87)
 - Board View: flipped output. (#89)
 - Board View: problems with netnames using spaces. (#90)
@@ -130,7 +130,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

 ### Fixed
 - Problem when using E/DRC filters and the output dir didn't exist.
-- Not all errors during makefile generation were catched (got a stack trace).
+- Not all errors during makefile generation were caught (got a stack trace).
 - Output dirs created when generating a makefile for a compress target.
 - Problems with some SnapEDA libs (extra space in lib termination tag #57)
 - The "References" (plural) column is now coloured as "Reference" (singular)
@@ -268,8 +268,8 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

 ## [0.6.2] - 2020-08-25
 ### Changed
-- Discarded spaces at the beggining and end of user fields when creating the
-internal BoM. They are ususally mistakes that prevents grouping components.
+- Discarded spaces at the beginning and end of user fields when creating the
+internal BoM. They are usually mistakes that prevents grouping components.

 ### Fixed
 - The variants logic for BoMs when a component resquested to be only added to
@@ -277,7 +277,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 - Removed warnings about malformed values for DNF components indicating it in
 its value.
 - Problems with PcbDraw when generating PNG and JPG outputs. Now we use a more
-reliable conversion methode when available.
+reliable conversion method when available.

 ## [0.6.1] - 2020-08-20
 ### Added
@@ -407,7 +407,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

 ### Fixed
 - All pcbnew plot formats generated gerber job files
-- Most formats that needed layers didn't complain when ommited
+- Most formats that needed layers didn't complain when omitted

 ## [0.2.4] - 2020-05-19
 ### Changed
README.md (34 changed lines)
@@ -8,7 +8,7 @@

 **Important for KiCad 6 users**:
 - Only the code in the git repo supports KiCad 6 (no stable release yet)
-- The docker images taget `ki6` has KiCad 6, but you need to use the KiBot from the repo, not the one in the images.
+- The docker images target `ki6` has KiCad 6, but you need to use the KiBot from the repo, not the one in the images.
 - The docker image with KiCad 6 and KiBot that supports it is tagged as `dev_k6`
 - The GitHub action with KiCad 6 support is tagged as `v1_k6`
 - When using KiCad 6 you must migrate the whole project and pass the migrated files to KiBot.
@@ -319,7 +319,7 @@ This selection isn't stored in the PCB file. The global `units` value is used by

 #### Output directory option

-The `out_dir` option can define the base outut directory. This is the same as the `-d`/`--out-dir` command line option.
+The `out_dir` option can define the base output directory. This is the same as the `-d`/`--out-dir` command line option.
 Note that the command line option has precedence over it.

 Expansion patterns are applied to this value, but you should avoid using patterns that expand according to the context, i.e. **%c**, **%d**, **%f**, **%F**, **%p** and **%r**.
@@ -379,7 +379,7 @@ Both concepts are closely related. In fact variants can use filters.
 The current implementation of the filters allow to exclude components from some of the processing stages. The most common use is to exclude them from some output.
 In the future more advanced filters will allow modification of component details.

-Variants are currently used to create *assembly variants*. This concept is used to manufature one PCB used for various products.
+Variants are currently used to create *assembly variants*. This concept is used to manufacture one PCB used for various products.
 You can learn more about KiBot variants on the following [example repo](https://inti-cmnb.github.io/kibot_variants_arduprog/).

 As mentioned above the current use of filters is to mark some components. Mainly to exclude them, but also to mark them as special.
@@ -412,12 +412,12 @@ Currently the only type available is `generic`.
 - generic: Generic filter
 This filter is based on regular expressions.
 It also provides some shortcuts for common situations.
-Note that matches aren't case sensitive and spaces at the beggining and the end are removed.
+Note that matches aren't case sensitive and spaces at the beginning and the end are removed.
 The internal `_mechanical` filter emulates the KiBoM behavior for default exclusions.
 The internal `_kicost_dnp` filter emulates KiCost's `dnp` field.
 * Valid keys:
 - `comment`: [string=''] A comment for documentation purposes.
-- `config_field`: [string='Config'] Name of the field used to clasify components.
+- `config_field`: [string='Config'] Name of the field used to classify components.
 - `config_separators`: [string=' ,'] Characters used to separate options inside the config field.
 - `exclude_all_hash_ref`: [boolean=false] Exclude all components with a reference starting with #.
 - `exclude_any`: [list(dict)] A series of regular expressions used to exclude parts.
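To make the keys above concrete, here is a minimal sketch of a `generic` filter entry. The filter name, comment and values are placeholders, and the flat entry layout under a `filters:` section is assumed from the keys this README lists:

```yaml
filters:
  - name: 'no_hash_refs'            # hypothetical filter name
    type: 'generic'
    comment: 'Exclude #-prefixed references such as power symbols'
    config_field: 'Config'          # field used to classify components
    exclude_all_hash_ref: true      # drop every reference starting with #
```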
@@ -495,12 +495,12 @@ Currently the only type available is `generic`.
 - `split_fields`: [list(string)] List of fields to split, usually the distributors part numbers.
 - `split_fields_expand`: [boolean=false] When `true` the fields in `split_fields` are added to the internal names.
 - `use_ref_sep_for_first`: [boolean=true] Force the reference separator use even for the first component in the list (KiCost behavior).
-- `value_alt_field`: [string='value_subparts'] Field containing replacements for the `Value` field. So we get real values for splitted parts.
+- `value_alt_field`: [string='value_subparts'] Field containing replacements for the `Value` field. So we get real values for split parts.
 - var_rename: Var_Rename
 This filter implements the VARIANT:FIELD=VALUE renamer to get FIELD=VALUE when VARIANT is in use.
 * Valid keys:
 - `comment`: [string=''] A comment for documentation purposes.
-- `force_variant`: [string=''] Use this variant instead of the current variant. Usefull for IBoM variants.
+- `force_variant`: [string=''] Use this variant instead of the current variant. Useful for IBoM variants.
 - `name`: [string=''] Used to identify this particular filter definition.
 - `separator`: [string=':'] Separator used between the variant and the field name.
 - `variant_to_value`: [boolean=false] Rename fields matching the variant to the value of the component.
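A `var_rename` filter follows the same pattern; here is a sketch using only the keys documented above (the name and comment are placeholders):

```yaml
filters:
  - name: 'variant_field_rename'    # hypothetical filter name
    type: 'var_rename'
    comment: 'Turn VARIANT:FIELD=VALUE fields into FIELD=VALUE for the active variant'
    separator: ':'                  # character between the variant and the field name
    variant_to_value: false         # keep the field value as-is
```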
@@ -802,7 +802,7 @@ Next time you need this list just use an alias, like this:
 * Valid keys:
 - `file`: [string=''] Name of the schematic to aggregate.
 - `name`: [string=''] Name to identify this source. If empty we use the name of the schematic.
-- `number`: [number=1] Number of boards to build (components multiplier). Use negative to substract.
+- `number`: [number=1] Number of boards to build (components multiplier). Use negative to subtract.
 - `ref_id`: [string=''] A prefix to add to all the references from this project.
 - `angle_positive`: [boolean=true] Always use positive values for the footprint rotation.
 - `bottom_negative_x`: [boolean=false] Use negative X coordinates for footprints on bottom layer (for XYRS).
@@ -1155,7 +1155,7 @@ Next time you need this list just use an alias, like this:
 - `plot_footprint_refs`: [boolean=true] Include the footprint references.
 - `plot_footprint_values`: [boolean=true] Include the footprint values.
 - `plot_sheet_reference`: [boolean=false] Currently without effect.
-- `subtract_mask_from_silk`: [boolean=false] Substract the solder mask from the silk screen.
+- `subtract_mask_from_silk`: [boolean=false] Subtract the solder mask from the silk screen.
 - `tent_vias`: [boolean=true] Cover the vias.
 - `uppercase_extensions`: [boolean=false] Use uppercase names for the extensions.
 - `use_aux_axis_as_origin`: [boolean=false] Use the auxiliary axis as origin for coordinates.
@@ -1486,7 +1486,7 @@ Next time you need this list just use an alias, like this:

 * PDF (Portable Document Format)
 * Type: `pdf`
-* Description: Exports the PCB to the most common exhange format. Suitable for printing.
+* Description: Exports the PCB to the most common exchange format. Suitable for printing.
 Note that this output isn't the best for documating your project.
 This output is what you get from the File/Plot menu in pcbnew.
 * Valid keys:
@@ -1557,7 +1557,7 @@ Next time you need this list just use an alias, like this:

 * PDF PCB Print (Portable Document Format)
 * Type: `pdf_pcb_print`
-* Description: Exports the PCB to the most common exhange format. Suitable for printing.
+* Description: Exports the PCB to the most common exchange format. Suitable for printing.
 This is the main format to document your PCB.
 This output is what you get from the 'File/Print' menu in pcbnew.
 * Valid keys:
@@ -1598,7 +1598,7 @@ Next time you need this list just use an alias, like this:

 * PDF Schematic Print (Portable Document Format)
 * Type: `pdf_sch_print`
-* Description: Exports the PCB to the most common exhange format. Suitable for printing.
+* Description: Exports the PCB to the most common exchange format. Suitable for printing.
 This is the main format to document your schematic.
 This output is what you get from the 'File/Print' menu in eeschema.
 * Valid keys:
@@ -1624,7 +1624,7 @@ Next time you need this list just use an alias, like this:
 * Pick & place
 * Type: `position`
 * Description: Generates the file with position information for the PCB components, used by the pick and place machine.
-This output is what you get from the 'File/Fabrication output/Footprint poistion (.pos) file' menu in pcbnew.
+This output is what you get from the 'File/Fabrication output/Footprint position (.pos) file' menu in pcbnew.
 * Valid keys:
 - `comment`: [string=''] A comment for documentation purposes.
 - `dir`: [string='./'] Output directory for the generated files. If it starts with `+` the rest is concatenated to the default dir.
@@ -1639,7 +1639,7 @@ Next time you need this list just use an alias, like this:
 - `columns`: [list(dict)|list(string)] Which columns are included in the output.
 * Valid keys:
 - `id`: [string=''] [Ref,Val,Package,PosX,PosY,Rot,Side] Internal name.
-- `name`: [string=''] Name to use in the outut file. The id is used when empty.
+- `name`: [string=''] Name to use in the output file. The id is used when empty.
 - `dnf_filter`: [string|list(string)='_none'] Name of the filter to mark components as not fitted.
 A short-cut to use for simple cases where a variant is an overkill.
 - `format`: [string='ASCII'] [ASCII,CSV] Format for the position file.
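Putting the position keys together, a sketch of a `position` output could look like this. The output name, directory and column choice are hypothetical; the structure follows the keys above and the sample configuration shown later in this diff:

```yaml
outputs:
  - name: 'position_csv'            # hypothetical output name
    comment: 'Pick and place file for assembly'
    type: 'position'
    dir: 'positions'                # hypothetical output directory
    options:
      format: 'CSV'                 # ASCII or CSV
      columns:                      # plain ids and id/name pairs can be mixed
        - id: 'Ref'
          name: 'Reference'         # header used in the generated file
        - 'PosX'
        - 'PosY'
        - 'Rot'
        - 'Side'
```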
@@ -2136,7 +2136,7 @@ But looking at the 1 k resistors is harder. We have 80, three from *merge_1*, on
 So we have 10*3+20*3+30=120, this is clear, but the BoM says they are R1-R3 R2-R4 R5, which is a little bit confusing.
 In this simple example is easy to correlate R1-R3 to *merge_1*, R2-R4 to *merge_2* and R5 to *merge_1*.
 For bigger projects this gets harder.
-Lets assing an *id* to each project, we'll use 'A' for *merge_1*, 'B' for *merge_2* and 'C' for *merge_3*:
+Lets assign an *id* to each project, we'll use 'A' for *merge_1*, 'B' for *merge_2* and 'C' for *merge_3*:

 ```yaml
 kibot:
@@ -2278,7 +2278,7 @@ import:
 ```

 This will import all outputs and filters, but not variants or globals.
-Also note that imported globals has more precendence than the ones defined in the same file.
+Also note that imported globals has more precedence than the ones defined in the same file.

 ## Usage

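For selective imports, an entry can be a plain file name or a dictionary. The available keys (`file`, `outputs`, `filters`, `variants` and `globals`) are the ones handled by the import-parsing code shown later in this diff; the file and filter names below are placeholders:

```yaml
import:
  # String form: import everything from the file
  - common_outputs.kibot.yaml
  # Dictionary form: import only the named filters from another file
  - file: common_filters.kibot.yaml
    filters: ['no_hash_refs', 'variant_field_rename']
```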
@@ -2321,7 +2321,7 @@ pcb_files:
 kibot -b $(PCB) -c $(KIBOT_CFG)
 ```

-If you need to supress messages use `--quiet` or `-q` and if you need to get more informatio about what's going on use `--verbose` or `-v`.
+If you need to suppress messages use `--quiet` or `-q` and if you need to get more information about what's going on use `--verbose` or `-v`.

 If you want to generate only some of the outputs use:

@@ -8,7 +8,7 @@

 **Important for KiCad 6 users**:
 - Only the code in the git repo supports KiCad 6 (no stable release yet)
-- The docker images taget `ki6` has KiCad 6, but you need to use the KiBot from the repo, not the one in the images.
+- The docker images target `ki6` has KiCad 6, but you need to use the KiBot from the repo, not the one in the images.
 - The docker image with KiCad 6 and KiBot that supports it is tagged as `dev_k6`
 - The GitHub action with KiCad 6 support is tagged as `v1_k6`
 - When using KiCad 6 you must migrate the whole project and pass the migrated files to KiBot.
@@ -254,7 +254,7 @@ This selection isn't stored in the PCB file. The global `units` value is used by

 #### Output directory option

-The `out_dir` option can define the base outut directory. This is the same as the `-d`/`--out-dir` command line option.
+The `out_dir` option can define the base output directory. This is the same as the `-d`/`--out-dir` command line option.
 Note that the command line option has precedence over it.

 Expansion patterns are applied to this value, but you should avoid using patterns that expand according to the context, i.e. **%c**, **%d**, **%f**, **%F**, **%p** and **%r**.
@@ -314,7 +314,7 @@ Both concepts are closely related. In fact variants can use filters.
 The current implementation of the filters allow to exclude components from some of the processing stages. The most common use is to exclude them from some output.
 In the future more advanced filters will allow modification of component details.

-Variants are currently used to create *assembly variants*. This concept is used to manufature one PCB used for various products.
+Variants are currently used to create *assembly variants*. This concept is used to manufacture one PCB used for various products.
 You can learn more about KiBot variants on the following [example repo](https://inti-cmnb.github.io/kibot_variants_arduprog/).

 As mentioned above the current use of filters is to mark some components. Mainly to exclude them, but also to mark them as special.
@@ -777,7 +777,7 @@ But looking at the 1 k resistors is harder. We have 80, three from *merge_1*, on
 So we have 10*3+20*3+30=120, this is clear, but the BoM says they are R1-R3 R2-R4 R5, which is a little bit confusing.
 In this simple example is easy to correlate R1-R3 to *merge_1*, R2-R4 to *merge_2* and R5 to *merge_1*.
 For bigger projects this gets harder.
-Lets assing an *id* to each project, we'll use 'A' for *merge_1*, 'B' for *merge_2* and 'C' for *merge_3*:
+Lets assign an *id* to each project, we'll use 'A' for *merge_1*, 'B' for *merge_2* and 'C' for *merge_3*:

 ```yaml
 kibot:
@@ -919,7 +919,7 @@ import:
 ```

 This will import all outputs and filters, but not variants or globals.
-Also note that imported globals has more precendence than the ones defined in the same file.
+Also note that imported globals has more precedence than the ones defined in the same file.

 ## Usage

@@ -962,7 +962,7 @@ pcb_files:
 kibot -b $(PCB) -c $(KIBOT_CFG)
 ```

-If you need to supress messages use `--quiet` or `-q` and if you need to get more informatio about what's going on use `--verbose` or `-v`.
+If you need to suppress messages use `--quiet` or `-q` and if you need to get more information about what's going on use `--verbose` or `-v`.

 If you want to generate only some of the outputs use:

@@ -82,7 +82,7 @@ outputs:
 - file: ''
 # [string=''] Name to identify this source. If empty we use the name of the schematic
 name: ''
-# [number=1] Number of boards to build (components multiplier). Use negative to substract
+# [number=1] Number of boards to build (components multiplier). Use negative to subtract
 number: 1
 # [string=''] A prefix to add to all the references from this project
 ref_id: ''
@@ -517,7 +517,7 @@ outputs:
 plot_footprint_values: true
 # [boolean=false] Currently without effect
 plot_sheet_reference: false
-# [boolean=false] Substract the solder mask from the silk screen
+# [boolean=false] Subtract the solder mask from the silk screen
 subtract_mask_from_silk: false
 # [boolean=true] Cover the vias
 tent_vias: true
@@ -925,7 +925,7 @@ outputs:
 # Note that this output isn't the best for documating your project.
 # This output is what you get from the File/Plot menu in pcbnew.
 - name: 'pdf_example'
-comment: 'Exports the PCB to the most common exhange format. Suitable for printing.'
+comment: 'Exports the PCB to the most common exchange format. Suitable for printing.'
 type: 'pdf'
 dir: 'Example/pdf_dir'
 options:
@@ -978,7 +978,7 @@ outputs:
 # This is the main format to document your PCB.
 # This output is what you get from the 'File/Print' menu in pcbnew.
 - name: 'pdf_pcb_print_example'
-comment: 'Exports the PCB to the most common exhange format. Suitable for printing.'
+comment: 'Exports the PCB to the most common exchange format. Suitable for printing.'
 type: 'pdf_pcb_print'
 dir: 'Example/pdf_pcb_print_dir'
 options:
@@ -1016,7 +1016,7 @@ outputs:
 # This is the main format to document your schematic.
 # This output is what you get from the 'File/Print' menu in eeschema.
 - name: 'pdf_sch_print_example'
-comment: 'Exports the PCB to the most common exhange format. Suitable for printing.'
+comment: 'Exports the PCB to the most common exchange format. Suitable for printing.'
 type: 'pdf_sch_print'
 dir: 'Example/pdf_sch_print_dir'
 options:
@@ -1033,7 +1033,7 @@ outputs:
 # Not fitted components are crossed
 variant: ''
 # Pick & place:
-# This output is what you get from the 'File/Fabrication output/Footprint poistion (.pos) file' menu in pcbnew.
+# This output is what you get from the 'File/Fabrication output/Footprint position (.pos) file' menu in pcbnew.
 - name: 'position_example'
 comment: 'Generates the file with position information for the PCB components, used by the pick and place machine.'
 type: 'position'
@@ -1045,7 +1045,7 @@ outputs:
 columns:
 # [string=''] [Ref,Val,Package,PosX,PosY,Rot,Side] Internal name
 - id: 'Ref'
-# [string=''] Name to use in the outut file. The id is used when empty
+# [string=''] Name to use in the output file. The id is used when empty
 name: 'Reference'
 # [string|list(string)='_none'] Name of the filter to mark components as not fitted.
 # A short-cut to use for simple cases where a variant is an overkill
@@ -280,7 +280,7 @@ def detect_kicad():
 # Bug in KiCad (#6989), prints to stderr:
 # `../src/common/stdpbase.cpp(62): assert "traits" failed in Get(test_dir): create wxApp before calling this`
 # Found in KiCad 5.1.8, 5.1.9
-# So we temporarily supress stderr
+# So we temporarily suppress stderr
 with hide_stderr():
 GS.kicad_conf_path = pcbnew.GetKicadConfigPath()
 GS.pro_ext = '.pro'
@@ -182,7 +182,7 @@ class ComponentGroup(object):
 def add_component(self, c):
 """ Add a component to the group.
 Avoid repetition, checks if suitable.
-Note: repeated components happend when a component contains more than one unit """
+Note: repeated components happens when a component contains more than one unit """
 if not self.components:
 self.components.append(c)
 self.refs[c.ref+c.project] = c
@@ -382,7 +382,7 @@ def get_value_sort(comp, fallback_ref=False):
 if res:
 value, (mult, mult_s), unit = res
 if comp.ref_prefix in "CL":
-# fempto Farads
+# femto Farads
 value = "{0:15d}".format(int(value * 1e15 * mult + 0.1))
 else:
 # milli Ohms
@@ -473,7 +473,7 @@ def group_components(cfg, components):
 y_origin = 0.0
 if cfg.use_aux_axis_as_origin:
 (x_origin, y_origin) = GS.get_aux_origin()
-logger.debug('Using auxiliar origin: x={} y={}'.format(x_origin, y_origin))
+logger.debug('Using auxiliary origin: x={} y={}'.format(x_origin, y_origin))
 # Process the groups
 for g in groups:
 # Sort the references within each group
@@ -72,7 +72,7 @@ def write_csv(filename, ext, groups, headings, head_names, cfg):
 head_names = [list of headings to display in the BoM file]
 cfg = BoMOptions object with all the configuration
 """
-# Delimeter is assumed from file extension
+# Delimiter is assumed from file extension
 # Override delimiter if separator specified
 if ext == "csv" and cfg.csv.separator:
 delimiter = cfg.csv.separator
@@ -90,7 +90,7 @@ def get_prefix(prefix):
 if prefix in PREFIX_GIGA:
 return 1.0e9, 'G'
 # Unknown, we shouldn't get here because the regex matched
-# BUT: I found that sometimes unexpected things happend, like mu matching micro and then we reaching this code
+# BUT: I found that sometimes unexpected things happen, like mu matching micro and then we reaching this code
 # Now is fixed, but I can't be sure some bizarre case is overlooked
 logger.error('Unknown prefix, please report')
 return 1, ''
@@ -193,7 +193,7 @@ def compare_values(c1, c2):
 # Values match
 if u1 == u2:
 return True # Units match
-# No longer posible because now we use the prefix to determine absent units
+# No longer possible because now we use the prefix to determine absent units
 # if not u1:
 # return True # No units for component 1
 # if not u2:
@@ -240,12 +240,12 @@ class CfgYamlReader(object):
 if outs is None and explicit_outs and 'outputs' not in data:
 logger.warning(W_NOOUTPUTS+"No outputs found in `{}`".format(fn_rel))

-def _parse_import_filters(self, fils, explicit_fils, fn_rel, data):
-if (fils is None or len(fils) > 0) and 'filters' in data:
+def _parse_import_filters(self, filters, explicit_fils, fn_rel, data):
+if (filters is None or len(filters) > 0) and 'filters' in data:
 i_fils = self._parse_filters(data['filters'])
-if fils is not None:
+if filters is not None:
 sel_fils = {}
-for f in fils:
+for f in filters:
 if f in i_fils:
 sel_fils[f] = i_fils[f]
 else:
@@ -257,7 +257,7 @@ class CfgYamlReader(object):
 else:
 RegOutput.add_filters(sel_fils)
 logger.debug('Filters loaded from `{}`: {}'.format(fn_rel, sel_fils.keys()))
-if fils is None and explicit_fils and 'filters' not in data:
+if filters is None and explicit_fils and 'filters' not in data:
 logger.warning(W_NOFILTERS+"No filters found in `{}`".format(fn_rel))

 def _parse_import_variants(self, vars, explicit_vars, fn_rel, data):
@@ -313,7 +313,7 @@ class CfgYamlReader(object):
 if isinstance(entry, str):
 fn = entry
 outs = None
-fils = []
+filters = []
 vars = []
 globals = []
 explicit_outs = True
@@ -321,7 +321,7 @@ class CfgYamlReader(object):
 explicit_vars = False
 explicit_globals = False
 elif isinstance(entry, dict):
-fn = outs = fils = vars = globals = None
+fn = outs = filters = vars = globals = None
 explicit_outs = explicit_fils = explicit_vars = explicit_globals = False
 for k, v in entry.items():
 if k == 'file':
@@ -332,7 +332,7 @@ class CfgYamlReader(object):
 outs = self._parse_import_items('outputs', fn, v)
 explicit_outs = True
 elif k == 'filters':
-fils = self._parse_import_items('filters', fn, v)
+filters = self._parse_import_items('filters', fn, v)
 explicit_fils = True
 elif k == 'variants':
 vars = self._parse_import_items('variants', fn, v)
@@ -355,7 +355,7 @@ class CfgYamlReader(object):
 # Outputs
 self._parse_import_outputs(outs, explicit_outs, fn_rel, data)
 # Filters
-self._parse_import_filters(fils, explicit_fils, fn_rel, data)
+self._parse_import_filters(filters, explicit_fils, fn_rel, data)
 # Variants
 self._parse_import_variants(vars, explicit_vars, fn_rel, data)
 # Globals
@@ -392,7 +392,7 @@ class CfgYamlReader(object):
 # List of outputs
 version = None
 globals_found = False
-# Analize each section
+# Analyze each section
 for k, v in data.items():
 # logger.debug('{} {}'.format(k, v))
 if k == 'kiplot' or k == 'kibot':
@@ -490,7 +490,7 @@ def print_output_options(name, cl, indent):
 ind_help = len(preface)*' '
 for ln in range(1, clines):
 text = lines[ln].strip()
-# Dots at the beggining are replaced by spaces.
+# Dots at the beginning are replaced by spaces.
 # Used to keep indentation.
 if text[0] == '.':
 for i in range(1, len(text)):
@@ -538,10 +538,10 @@ def print_output_help(name):


 def print_preflights_help():
-pres = BasePreFlight.get_registered()
-logger.debug('{} supported preflights'.format(len(pres)))
+prefs = BasePreFlight.get_registered()
+logger.debug('{} supported preflights'.format(len(prefs)))
 print('Supported preflight options:\n')
-for n, o in OrderedDict(sorted(pres.items())).items():
+for n, o in OrderedDict(sorted(prefs.items())).items():
 help, options = o.get_doc()
 if help is None:
 help = 'Undocumented'
@@ -551,10 +551,10 @@ def print_preflights_help():


 def print_filters_help():
-fils = RegFilter.get_registered()
-logger.debug('{} supported filters'.format(len(fils)))
+filters = RegFilter.get_registered()
+logger.debug('{} supported filters'.format(len(filters)))
 print('Supported filters:\n')
-for n, o in OrderedDict(sorted(fils.items())).items():
+for n, o in OrderedDict(sorted(filters.items())).items():
 help = o.__doc__
 if help is None:
 help = 'Undocumented'
@@ -582,7 +582,7 @@ def print_example_options(f, cls, name, indent, po, is_list=False):
 if help:
 help_lines = help.split('\n')
 for hl in help_lines:
-# Dots at the beggining are replaced by spaces.
+# Dots at the beginning are replaced by spaces.
 # Used to keep indentation.
 hl = hl.strip()
 if hl[0] == '.':
@@ -641,8 +641,8 @@ def create_example(pcb_file, out_dir, copy_options, copy_expand):
 f.write('kibot:\n version: 1\n')
 # Preflights
 f.write('\npreflight:\n')
-pres = BasePreFlight.get_registered()
-for n, o in OrderedDict(sorted(pres.items())).items():
+prefs = BasePreFlight.get_registered()
+for n, o in OrderedDict(sorted(prefs.items())).items():
 if o.__doc__:
 lines = trim(o.__doc__.rstrip()+'.')
 for ln in lines:
@@ -30,7 +30,7 @@ class Generic(BaseFilter): # noqa: F821
 """ Generic filter
 This filter is based on regular expressions.
 It also provides some shortcuts for common situations.
-Note that matches aren't case sensitive and spaces at the beggining and the end are removed.
+Note that matches aren't case sensitive and spaces at the beginning and the end are removed.
 The internal `_mechanical` filter emulates the KiBoM behavior for default exclusions.
 The internal `_kicost_dnp` filter emulates KiCost's `dnp` field """
 def __init__(self):
@@ -53,7 +53,7 @@ class Generic(BaseFilter): # noqa: F821
 self.exclude_value = False
 """ Exclude components if their 'Value' is any of the keys """
 self.config_field = 'Config'
-""" Name of the field used to clasify components """
+""" Name of the field used to classify components """
 self.config_separators = ' ,'
 """ Characters used to separate options inside the config field """
 self.exclude_config = False
@@ -58,7 +58,7 @@ class Subparts(BaseFilter): # noqa: F821
 self.use_ref_sep_for_first = True
 """ Force the reference separator use even for the first component in the list (KiCost behavior) """
 self.value_alt_field = 'value_subparts'
-""" Field containing replacements for the `Value` field. So we get real values for splitted parts """
+""" Field containing replacements for the `Value` field. So we get real values for split parts """

 def config(self, parent):
 super().config(parent)
@@ -139,13 +139,13 @@ class Subparts(BaseFilter): # noqa: F821
 except ValueError:
 logger.error('Internal error qty_to_float("{}"), please report'.format(qty))

-def do_split(self, comp, max_num_subparts, splitted_fields):
+def do_split(self, comp, max_num_subparts, split_fields):
 """ Split `comp` according to the detected subparts """
 # Split it
 multi_part = max_num_subparts > 1
 if multi_part and GS.debug_level > 1:
 logger.debug("Splitting {} in {} subparts".format(comp.ref, max_num_subparts))
-splitted = []
+split = []
 # Compute the total for the modified value
 total_parts = max_num_subparts if self.modify_first_value else max_num_subparts-1
 # Check if we have replacements for the `Value` field
@@ -161,7 +161,7 @@ class Subparts(BaseFilter): # noqa: F821
 if self.use_ref_sep_for_first:
 new_comp.ref = new_comp.ref+self.ref_sep+str(i+1)
 elif i > 0:
-# I like it better. The first is usually the real component, the rest are accesories.
+# I like it better. The first is usually the real component, the rest are accessories.
 new_comp.ref = new_comp.ref+self.ref_sep+str(i)
 # Adjust the suffix to be "sort friendly"
 # Currently useless, but could help in the future
@@ -180,10 +180,10 @@ class Subparts(BaseFilter): # noqa: F821
 prev_qty = None
 prev_field = None
 max_qty = 0
-if not self.check_multiplier.intersection(splitted_fields):
+if not self.check_multiplier.intersection(split_fields):
 # No field to check for qty here, default to 1
 max_qty = 1
-for field, values in splitted_fields.items():
+for field, values in split_fields.items():
 check_multiplier = field in self.check_multiplier
 value = ''
 qty = '1'
@@ -203,23 +203,23 @@ class Subparts(BaseFilter): # noqa: F821
 new_comp.set_field(field+'_qty', qty)
 max_qty = max(max_qty, self.qty_to_float(qty))
 new_comp.qty = max_qty
-splitted.append(new_comp)
+split.append(new_comp)
 if not multi_part and int(max_qty) == 1:
 # No real split and no multiplier
 return
 if GS.debug_level > 2:
 logger.debug('Old component: '+comp.ref+' '+str([str(f) for f in comp.fields]))
-logger.debug('Fields to split: '+str(splitted_fields))
+logger.debug('Fields to split: '+str(split_fields))
 logger.debug('New components:')
-for c in splitted:
+for c in split:
 logger.debug(' '+c.ref+' '+str([str(f) for f in c.fields]))
-return splitted
+return split

 def filter(self, comp):
 """ Look for fields containing `part1; mult:part2; etc.` """
 # Analyze how to split this component
 max_num_subparts = 0
-splitted_fields = {}
+split_fields = {}
 field_max = None
 for field in self._fields:
 value = comp.get_field_value(field)
@@ -227,16 +227,16 @@ class Subparts(BaseFilter): # noqa: F821
 # Skip it if not used
 continue
 subparts = self.subpart_list(value)
-splitted_fields[field] = subparts
+split_fields[field] = subparts
 num_subparts = len(subparts)
 if num_subparts > max_num_subparts:
 field_max = field
 max_num_subparts = num_subparts
-# Print a warning if this field has a different ammount
+# Print a warning if this field has a different amount
 if num_subparts != max_num_subparts:
-logger.warning(W_NUMSUBPARTS+'Different subparts ammount on {r}, field {c} has {cn} and {lc} has {lcn}.'
+logger.warning(W_NUMSUBPARTS+'Different subparts amount on {r}, field {c} has {cn} and {lc} has {lcn}.'
 .format(r=comp.ref, c=field_max, cn=max_num_subparts, lc=field, lcn=num_subparts))
-if len(splitted_fields) == 0:
+if len(split_fields) == 0:
 # Nothing to split
 return
 # Split the manufacturer name
@@ -253,6 +253,6 @@ class Subparts(BaseFilter): # noqa: F821
 for i in range(len(manfs)-1):
 if manfs[i+1] == '~':
 manfs[i+1] = manfs[i]
-splitted_fields[self.manf_field] = manfs
+split_fields[self.manf_field] = manfs
 # Now do the work
-return self.do_split(comp, max_num_subparts, splitted_fields)
+return self.do_split(comp, max_num_subparts, split_fields)
@@ -26,7 +26,7 @@ class Var_Rename(BaseFilter): # noqa: F821
 self.variant_to_value = False
 """ Rename fields matching the variant to the value of the component """
 self.force_variant = ''
-""" Use this variant instead of the current variant. Usefull for IBoM variants """
+""" Use this variant instead of the current variant. Useful for IBoM variants """

 def config(self, parent):
 super().config(parent)
@@ -7,7 +7,7 @@ import os
 try:
 import pcbnew
 except ImportError:
-# This is catched by __main__, ignore the error here
+# This is caught by __main__, ignore the error here
 class pcbnew(object):
 pass
 from datetime import datetime, date
@@ -526,7 +526,7 @@ class Bracket(SExpBase):

 def tosexp(self, tosexp=tosexp):
 bra = self._bra
-ket = BRACKETS[self._bra]
+ke = BRACKETS[self._bra]
 c = ''
 for i, v in enumerate(self._val):
 v = tosexp(v)
@@ -537,7 +537,7 @@ class Bracket(SExpBase):
 # Avoid spaces at the end of lines
 c = c.rstrip(' ')
 c += v
-return uformat("{0}{1}{2}", bra, c, ket)
+return uformat("{0}{1}{2}", bra, c, ke)


 def bracket(val, bra):
@@ -197,7 +197,7 @@ class DrawPoligon(object):
 pol_re = re.compile(r'P\s+(\d+)\s+' # 0 Number of points
 r'(\d+)\s+' # 1 Sub-part (0 == all)
 r'([012])\s+' # 2 Which representation (0 == both) for DeMorgan
-r'(-?\d+)\s+' # 3 Thickness (Components from 74xx.lib has poligons with -1000)
+r'(-?\d+)\s+' # 3 Thickness (Components from 74xx.lib has polygons with -1000)
 r'((?:-?\d+\s+)+)' # 4 The points
 r'([NFf])') # 5 Normal, Filled

@@ -208,7 +208,7 @@ class DrawPoligon(object):
 def parse(line):
 m = DrawPoligon.pol_re.match(line)
 if not m:
-logger.warning(W_BADPOLI + 'Unknown poligon definition `{}`'.format(line))
+logger.warning(W_BADPOLI + 'Unknown polygon definition `{}`'.format(line))
 return None
 pol = DrawPoligon()
 g = m.groups()
@@ -219,7 +219,7 @@ class DrawPoligon(object):
 pol.fill = g[5]
 coords = _split_space(g[4])
 if len(coords) != 2*pol.points:
-logger.warning(W_POLICOORDS + 'Expected {} coordinates and got {} in poligon'.format(2*pol.points, len(coords)))
+logger.warning(W_POLICOORDS + 'Expected {} coordinates and got {} in polygon'.format(2*pol.points, len(coords)))
 pol.points = int(len(coords)/2)
 pol.coords = [int(c) for c in coords]
 return pol
@@ -873,7 +873,7 @@ class SchematicComponent(object):
 - footprint_y: y position of the part in the pick & place.
 - footprint_w: width of the footprint (pads only).
 - footprint_h: height of the footprint (pads only)
-- qty: ammount of this part used.
+- qty: amount of this part used.
 """
 ref_re = re.compile(r'([^\d]+)([\?\d]+)')

@@ -969,7 +969,7 @@ class SchematicComponent(object):
 self.dfields_bkp = {f.name.lower(): f for f in self.fields_bkp}

 def _solve_ref(self, path):
-""" Look fo the correct reference for this path.
+""" Look for the correct reference for this path.
 Returns the default reference if no paths defined.
 Returns the first not empty reference if the current is empty. """
 ref = self.f_ref
@@ -1764,7 +1764,7 @@ class Schematic(object):
 self.sub_sheets[c].save(file, dest_dir)

 def save_variant(self, dest_dir):
-# Currently imposible
+# Currently impossible
 # if not os.path.exists(dest_dir):
 # os.makedirs(dest_dir)
 lib_yes = os.path.join(dest_dir, 'y.lib')
@@ -66,7 +66,7 @@ def _load_actions(path, load_internals=False):


 def load_actions():
-    """ Load all the available ouputs and preflights """
+    """ Load all the available outputs and preflights """
     global actions_loaded
     if actions_loaded:
         return
@@ -378,7 +378,7 @@ def run_output(out):
 def generate_outputs(outputs, target, invert, skip_pre, cli_order):
     logger.debug("Starting outputs for board {}".format(GS.pcb_file))
     preflight_checks(skip_pre)
-    # Chek if the preflights pulled options
+    # Check if the preflights pulled options
     for out in RegOutput.get_prioritary_outputs():
         config_output(out)
         logger.info('- '+str(out))
@@ -448,9 +448,9 @@ def gen_global_targets(f, pre_targets, out_targets, type):

 def get_pre_targets(targets, dependencies, is_pre):
     pcb_targets = sch_targets = ''
-    pres = BasePreFlight.get_in_use_objs()
+    prefs = BasePreFlight.get_in_use_objs()
     try:
-        for pre in pres:
+        for pre in prefs:
             tg = pre.get_targets()
             if not tg:
                 continue
@@ -7,7 +7,7 @@
 """
 Log module

-Handles logging initialization and formating.
+Handles logging initialization and formatting.
 """
 import sys
 import logging
@@ -246,7 +246,7 @@ class BaseMacroExpander(NodeTransformer):
         # Resolve macro binding.
         macro = self.isbound(macroname)
         if not macro: # pragma: no cover
-            raise MacroApplicationError(f"{loc}\nin {syntax} macro invocation for '{macroname}': the name '{macroname}' is not bound to a macro.")
+            raise MacroApplicationError(f"{loc}\n""in {syntax} macro invocation for '{macroname}': the name '{macroname}' is not bound to a macro.")

         # Expand the macro.
         expansion = self._apply_macro(macro, tree, kw, macroname, target)
@@ -272,7 +272,7 @@ class BaseMacroExpander(NodeTransformer):

         # If something went wrong, generate a standardized macro use site report.
         except Exception as err:
-            msg = f"{loc}\nin {syntax} macro invocation for '{macroname}'"
+            msg = f"{loc}\n""in {syntax} macro invocation for '{macroname}'"
             if isinstance(err, MacroApplicationError) and err.__cause__:
                 # Telescope nested use site reports, by keeping the original
                 # traceback and `__cause__`, but combining the messages.
@@ -263,7 +263,7 @@ def name2make(name):

 @contextmanager
 def hide_stderr():
-    """ Low level stderr supression, used to hide KiCad bugs. """
+    """ Low level stderr suppression, used to hide KiCad bugs. """
     newstderr = os.dup(2)
     devnull = os.open('/dev/null', os.O_WRONLY)
     os.dup2(devnull, 2)
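The hunk above only shows the first half of `hide_stderr()`. The usual shape of such a context manager, including the restore side that falls outside the hunk, is roughly the following sketch (not the verbatim KiBot code):

```python
import os
from contextlib import contextmanager


@contextmanager
def hide_stderr():
    """ Low level stderr suppression: send fd 2 to /dev/null while the block runs. """
    saved = os.dup(2)                             # keep a copy of the real stderr
    devnull = os.open(os.devnull, os.O_WRONLY)
    os.dup2(devnull, 2)                           # fd 2 now points at /dev/null
    try:
        yield
    finally:
        os.dup2(saved, 2)                         # restore the original stderr
        os.close(devnull)
        os.close(saved)


# Usage: anything written to stderr inside the block is discarded.
with hide_stderr():
    os.write(2, b'this goes nowhere\n')
```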
@@ -71,7 +71,7 @@ class BaseOutput(RegOutput):
     def get_targets(self, out_dir):
         """ Returns a list of targets generated by this output """
         if not (hasattr(self, "options") and hasattr(self.options, "get_targets")):
-            logger.error("Output {} doesn't implement get_targets(), plese report it".format(self))
+            logger.error("Output {} doesn't implement get_targets(), please report it".format(self))
             return []
         return self.options.get_targets(out_dir)

@@ -281,7 +281,7 @@ class Aggregate(Optionable):
             self.ref_id = ''
             """ A prefix to add to all the references from this project """
             self.number = 1
-            """ Number of boards to build (components multiplier). Use negative to substract """
+            """ Number of boards to build (components multiplier). Use negative to subtract """

     def config(self, parent):
         super().config(parent)
@@ -20,7 +20,7 @@ class GerberOptions(AnyLayerOptions):
             self.line_width = 0.1
             """ [0.02,2] Line_width for objects without width [mm] (KiCad 5) """
             self.subtract_mask_from_silk = False
-            """ Substract the solder mask from the silk screen """
+            """ Subtract the solder mask from the silk screen """
             self.use_protel_extensions = False
             """ Use legacy Protel file extensions """
             self._gerber_precision = 4.6
@@ -0,0 +1,111 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2020 Salvador E. Tropea
+# Copyright (c) 2020 Instituto Nacional de Tecnología Industrial
+# Copyright (c) 2018 John Beard
+# License: GPL-3.0
+# Project: KiBot (formerly KiPlot)
+# Adapted from: https://github.com/johnbeard/kiplot
+from pcbnew import (PLOT_FORMAT_GERBER, FromMM, ToMM, DIM_UNITS_MODE_MILLIMETRES, DIM_UNITS_MODE_INCHES, DIM_UNITS_MODE_AUTOMATIC)
+from .gs import GS
+from .out_any_layer import (AnyLayer, AnyLayerOptions)
+from .error import KiPlotConfigurationError
+from .macros import macros, document, output_class  # noqa: F401
+from . import log
+
+logger = log.get_logger()
+
+
+class GerberOptions(AnyLayerOptions):
+    def __init__(self):
+        with document:
+            self.use_aux_axis_as_origin = False
+            """ Use the auxiliary axis as origin for coordinates """
+            self.line_width = 0.1
+            """ [0.02,2] Line_width for objects without width [mm] (KiCad 5) """
+            self.subtract_mask_from_silk = False
+            """ Substract the solder mask from the silk screen """
+            self.use_protel_extensions = False
+            """ Use legacy Protel file extensions """
+            self._gerber_precision = 4.6
+            """ This the gerber coordinate format, can be 4.5 or 4.6 """
+            self.create_gerber_job_file = True
+            """ Creates a file with information about all the generated gerbers.
+                You can use it in gerbview to load all gerbers at once """
+            self.gerber_job_file = GS.def_global_output
+            """ Name for the gerber job file (%i='job', %x='gbrjob') """
+            self.use_gerber_x2_attributes = True
+            """ Use the extended X2 format (otherwise use X1 formerly RS-274X) """
+            self.use_gerber_net_attributes = True
+            """ Include netlist metadata """
+            self.disable_aperture_macros = False
+            """ Disable aperture macros (workaround for buggy CAM software) (KiCad 6) """
+        super().__init__()
+        self._plot_format = PLOT_FORMAT_GERBER
+        if GS.global_output is not None:
+            self.gerber_job_file = GS.global_output
+
+    @property
+    def gerber_precision(self):
+        return self._gerber_precision
+
+    @gerber_precision.setter
+    def gerber_precision(self, val):
+        if val != 4.5 and val != 4.6:
+            raise KiPlotConfigurationError("`gerber_precision` must be 4.5 or 4.6")
+        self._gerber_precision = val
+
+    def _configure_plot_ctrl(self, po, output_dir):
+        super()._configure_plot_ctrl(po, output_dir)
+        po.SetSubtractMaskFromSilk(self.subtract_mask_from_silk)
+        po.SetUseGerberProtelExtensions(self.use_protel_extensions)
+        po.SetGerberPrecision(5 if self.gerber_precision == 4.5 else 6)
+        po.SetCreateGerberJobFile(self.create_gerber_job_file)
+        po.SetUseGerberX2format(self.use_gerber_x2_attributes)
+        po.SetIncludeGerberNetlistInfo(self.use_gerber_net_attributes)
+        po.SetUseAuxOrigin(self.use_aux_axis_as_origin)
+        po.SetDrillMarksType(0)
+        if GS.ki5():
+            po.SetLineWidth(FromMM(self.line_width))
+        else:
+            po.SetDisableGerberMacros(self.disable_aperture_macros)  # pragma: no cover (Ki6)
+            ds = GS.board.GetDesignSettings()
+            logger.error(ds.m_DimensionUnitsMode)
+            ds.m_DimensionUnitsMode = DIM_UNITS_MODE_MILLIMETRES
+            logger.error(ds.m_DimensionUnitsMode)
+            logger.error(DIM_UNITS_MODE_AUTOMATIC)
+        po.gerber_job_file = self.gerber_job_file
+
+    def read_vals_from_po(self, po):
+        super().read_vals_from_po(po)
+        # usegerberattributes
+        self.use_gerber_x2_attributes = po.GetUseGerberX2format()
+        # usegerberextensions
+        self.use_protel_extensions = po.GetUseGerberProtelExtensions()
+        # usegerberadvancedattributes
+        self.use_gerber_net_attributes = po.GetIncludeGerberNetlistInfo()
+        # creategerberjobfile
+        self.create_gerber_job_file = po.GetCreateGerberJobFile()
+        # gerberprecision
+        self.gerber_precision = 4.0 + po.GetGerberPrecision()/10.0
+        # subtractmaskfromsilk
+        self.subtract_mask_from_silk = po.GetSubtractMaskFromSilk()
+        # useauxorigin
+        self.use_aux_axis_as_origin = po.GetUseAuxOrigin()
+        if GS.ki5():
+            # linewidth
+            self.line_width = ToMM(po.GetLineWidth())
+        else:
+            # disableapertmacros
+            self.disable_aperture_macros = po.GetDisableGerberMacros()  # pragma: no cover (Ki6)
+
+
+@output_class
+class Gerber(AnyLayer):
+    """ Gerber format
+        This is the main fabrication format for the PCB.
+        This output is what you get from the File/Plot menu in pcbnew. """
+    def __init__(self):
+        super().__init__()
+        with document:
+            self.options = GerberOptions
+            """ [dict] Options for the `gerber` output """
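One detail of the file above that is easy to miss: the user-facing `gerber_precision` is written as 4.5/4.6 (integer.decimal digits), while pcbnew's `SetGerberPrecision()` appears to take only the decimal-digit count, and `read_vals_from_po()` converts back. A tiny sketch of that round trip (the helper names are made up for illustration):

```python
def precision_to_digits(gerber_precision):
    # Same validation/mapping as the property setter and _configure_plot_ctrl() above.
    if gerber_precision not in (4.5, 4.6):
        raise ValueError('`gerber_precision` must be 4.5 or 4.6')
    return 5 if gerber_precision == 4.5 else 6


def digits_to_precision(digits):
    # Inverse used by read_vals_from_po(): 5 -> 4.5, 6 -> 4.6
    return 4.0 + digits/10.0


assert precision_to_digits(4.6) == 6
assert digits_to_precision(precision_to_digits(4.5)) == 4.5
```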
@@ -45,7 +45,7 @@ class PDFOptions(DrillMarks):
 @output_class
 class PDF(AnyLayer, DrillMarks):
     """ PDF (Portable Document Format)
-        Exports the PCB to the most common exhange format. Suitable for printing.
+        Exports the PCB to the most common exchange format. Suitable for printing.
         Note that this output isn't the best for documating your project.
         This output is what you get from the File/Plot menu in pcbnew. """
     def __init__(self):
@@ -149,7 +149,7 @@ class PDF_Pcb_PrintOptions(VariantOptions):
 @output_class
 class PDF_Pcb_Print(BaseOutput): # noqa: F821
     """ PDF PCB Print (Portable Document Format)
-        Exports the PCB to the most common exhange format. Suitable for printing.
+        Exports the PCB to the most common exchange format. Suitable for printing.
         This is the main format to document your PCB.
         This output is what you get from the 'File/Print' menu in pcbnew. """
     def __init__(self):
@@ -89,7 +89,7 @@ class PDF_Sch_PrintOptions(VariantOptions):
 @output_class
 class PDF_Sch_Print(BaseOutput): # noqa: F821
     """ PDF Schematic Print (Portable Document Format)
-        Exports the PCB to the most common exhange format. Suitable for printing.
+        Exports the PCB to the most common exchange format. Suitable for printing.
         This is the main format to document your schematic.
         This output is what you get from the 'File/Print' menu in eeschema. """
     def __init__(self):
@@ -40,7 +40,7 @@ class PosColumns(Optionable):
             self.id = ''
             """ [Ref,Val,Package,PosX,PosY,Rot,Side] Internal name """
             self.name = ''
-            """ Name to use in the outut file. The id is used when empty """
+            """ Name to use in the output file. The id is used when empty """
         self._id_example = 'Ref'
         self._name_example = 'Reference'
@@ -129,17 +129,17 @@ class PositionOptions(VariantOptions):
         maxSizes[0] = maxSizes[0] + 2

         for m in modulesStr:
-            fle = bothf
-            if fle is None:
+            file = bothf
+            if file is None:
                 if m[-1] == "top":
-                    fle = topf
+                    file = topf
                 else:
-                    fle = botf
+                    file = botf
             for idx, col in enumerate(m):
                 if idx > 0:
-                    fle.write(" ")
-                fle.write("{0: <{width}}".format(col, width=maxSizes[idx]))
-            fle.write("\n")
+                    file.write(" ")
+                file.write("{0: <{width}}".format(col, width=maxSizes[idx]))
+            file.write("\n")

         for f in files:
             f.write("## End\n")
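Aside from the `fle` to `file` rename, the write loop above is plain per-column padding; the formatting idiom it relies on is this (the row and widths below are hypothetical):

```python
row = ['R1', '10K', 'R_0805', '12.3000', '-45.6000', '90.0000', 'top']
maxSizes = [8, 10, 14, 10, 10, 9, 6]   # per-column widths, computed earlier in the method
line = ''
for idx, col in enumerate(row):
    if idx > 0:
        line += ' '
    # Left-align each value in a field of maxSizes[idx] characters.
    line += '{0: <{width}}'.format(col, width=maxSizes[idx])
print(line.rstrip())
```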
@@ -168,14 +168,14 @@ class PositionOptions(VariantOptions):
             f.write("\n")

         for m in modulesStr:
-            fle = bothf
-            if fle is None:
+            file = bothf
+            if file is None:
                 if m[-1] == "top":
-                    fle = topf
+                    file = topf
                 else:
-                    fle = botf
-            fle.write(",".join('{}'.format(e) for e in m))
-            fle.write("\n")
+                    file = botf
+            file.write(",".join('{}'.format(e) for e in m))
+            file.write("\n")

         if topf is not None:
             topf.close()
@@ -227,7 +227,7 @@ class PositionOptions(VariantOptions):
         y_origin = 0.0
         if self.use_aux_axis_as_origin:
             (x_origin, y_origin) = GS.get_aux_origin()
-            logger.debug('Using auxiliar origin: x={} y={}'.format(x_origin, y_origin))
+            logger.debug('Using auxiliary origin: x={} y={}'.format(x_origin, y_origin))
         for m in sorted(GS.get_modules(), key=lambda c: _ref_key(c.GetReference())):
             ref = m.GetReference()
             logger.debug('P&P ref: {}'.format(ref))
@@ -294,7 +294,7 @@ class PositionOptions(VariantOptions):
 class Position(BaseOutput): # noqa: F821
     """ Pick & place
         Generates the file with position information for the PCB components, used by the pick and place machine.
-        This output is what you get from the 'File/Fabrication output/Footprint poistion (.pos) file' menu in pcbnew. """
+        This output is what you get from the 'File/Fabrication output/Footprint position (.pos) file' menu in pcbnew. """
     def __init__(self):
         super().__init__()
         with document:
@@ -44,7 +44,7 @@ class RegOutput(Optionable, Registrable):
     _def_variants = {}
     # List of defined outputs
     _def_outputs = OrderedDict()
-    # List of prioritary outputs
+    # List of priority outputs
     _prio_outputs = OrderedDict()

     def __init__(self):
@@ -40,7 +40,7 @@ class BaseVariant(RegVariant):
                 Use '_kibom_dnc' for the default KiBoM behavior """

     def get_variant_field(self):
-        ''' Returns the name of the field used to determine if the component belongs to teh variant '''
+        ''' Returns the name of the field used to determine if the component belongs to the variant '''
         return None

     def filter(self, comps):
@@ -33,7 +33,7 @@ class IBoM(BaseVariant): # noqa: F821
             """ [string|list(string)=''] List of board variants to include in the BOM """

     def get_variant_field(self):
-        ''' Returns the name of the field used to determine if the component belongs to teh variant '''
+        ''' Returns the name of the field used to determine if the component belongs to the variant '''
         return self.variant_field

     def config(self, parent):
@@ -29,7 +29,7 @@ class KiBoM(BaseVariant): # noqa: F821
         self._def_dnc_filter = None
         with document:
             self.config_field = 'Config'
-            """ Name of the field used to clasify components """
+            """ Name of the field used to classify components """
             self.variant = Optionable
             """ [string|list(string)=''] Board variant(s) """

@@ -39,7 +39,7 @@ class KiCost(BaseVariant): # noqa: F821
                 Only supported internally, don't use it if you plan to use KiCost """

     def get_variant_field(self):
-        ''' Returns the name of the field used to determine if the component belongs to teh variant '''
+        ''' Returns the name of the field used to determine if the component belongs to the variant '''
        return self.variant_field

     def config(self, parent):
@@ -1450,7 +1450,7 @@ def test_int_bom_variant_t3(test_dir):


 def test_int_bom_variant_cli(test_dir):
-    """ Assing t1_v1 to default from cli. Make sure t1_v3 isn't affected """
+    """ Assign t1_v1 to default from cli. Make sure t1_v3 isn't affected """
     prj = 'kibom-variante'
     ctx = context.TestContextSCH(test_dir, 'test_int_bom_variant_cli', prj, 'int_bom_var_t1_cli', BOM_DIR)
     ctx.run(extra=['--global-redef', 'variant=t1_v1'])
@@ -1471,7 +1471,7 @@ def test_int_bom_variant_cli(test_dir):


 def test_int_bom_variant_glb(test_dir):
-    """ Assing t1_v1 to default from global. Make sure t1_v3 isn't affected """
+    """ Assign t1_v1 to default from global. Make sure t1_v3 isn't affected """
     prj = 'kibom-variante'
     ctx = context.TestContextSCH(test_dir, 'test_int_bom_variant_glb', prj, 'int_bom_var_t1_glb', BOM_DIR)
     ctx.run()
@@ -1491,7 +1491,7 @@ def test_int_bom_variant_glb(test_dir):


 def test_int_bom_variant_cl_gl(test_dir):
-    """ Assing t1_v1 to default from global.
+    """ Assign t1_v1 to default from global.
         Overwrite it from cli to t1_v2.
         Make sure t1_v3 isn't affected """
     prj = 'kibom-variante'
@@ -155,7 +155,7 @@ def test_no_get_targets(caplog):
     test.get_targets('')
     files = test.get_dependencies()
     files_pre = test_pre.get_dependencies()
-    assert "Output 'Fake' (dummy) [none] doesn't implement get_targets(), plese report it" in caplog.text
+    assert "Output 'Fake' (dummy) [none] doesn't implement get_targets(), please report it" in caplog.text
     assert files == [GS.sch_file]
     assert files_pre == [GS.sch_file]

@@ -178,7 +178,7 @@ def test_sch_missing_filtered(test_dir):


 def test_sch_bizarre_cases(test_dir):
-    """ Poligon without points.
+    """ Polygon without points.
         Pin with unknown direction. """
     if not context.ki5():
         # This is very KiCad 5 loader specific
@@ -77,7 +77,7 @@ def test_sch_errors_l3(test_dir):
 def test_sch_errors_l5(test_dir):
     if context.ki6():
         return
-    setup_ctx(test_dir, 'l5', ['Unknown poligon definition', 'Expected 6 coordinates and got 8 in poligon',
+    setup_ctx(test_dir, 'l5', ['Unknown polygon definition', 'Expected 6 coordinates and got 8 in polygon',
                                'Unknown square definition', 'Unknown circle definition', 'Unknown arc definition',
                                'Unknown text definition', 'Unknown pin definition', 'Failed to load component definition',
                                'Unknown draw element'])
@@ -340,17 +340,17 @@ class TestContext(object):
             server = None
         else:
             os.environ['KICOST_KITSPACE_URL'] = 'http://localhost:8000'
-            fo = open(self.get_out_path('server_stdout.txt'), 'at')
-            fe = open(self.get_out_path('server_stderr.txt'), 'at')
-            server = subprocess.Popen('./tests/utils/dummy-web-server.py', stdout=fo, stderr=fe)
+            f_o = open(self.get_out_path('server_stdout.txt'), 'at')
+            f_e = open(self.get_out_path('server_stderr.txt'), 'at')
+            server = subprocess.Popen('./tests/utils/dummy-web-server.py', stdout=f_o, stderr=f_e)
         try:
             self.do_run(cmd, ret_val, use_a_tty, chdir_out)
         finally:
             # Always kill the fake web server
             if kicost and server is not None:
                 server.terminate()
-                fo.close()
-                fe.close()
+                f_o.close()
+                f_e.close()
             # Do we need to restore the locale?
             if do_locale:
                 if old_LOCPATH:
@@ -475,7 +475,7 @@ class TestContext(object):
                self.get_out_path(gen),
                self.get_out_path('gen-%d.png')]
         subprocess.check_call(cmd)
-        # Chek number of pages
+        # Check number of pages
         ref_pages = glob(self.get_out_path('ref-*.png'))
         gen_pages = glob(self.get_out_path('gen-*.png'))
         logging.debug('Pages {} vs {}'.format(len(gen_pages), len(ref_pages)))
@@ -19,7 +19,7 @@ variants:

 outputs:
   - name: 'bom_internal_subparts'
-    comment: "Bill of Materials in CSV format, subparts splitted"
+    comment: "Bill of Materials in CSV format, subparts split"
     type: bom
     dir: .
     options: &bom_options
@@ -97,7 +97,7 @@ variants:

 outputs:
   - name: 'bom_internal_subparts'
-    comment: "Bill of Materials in CSV format, subparts splitted"
+    comment: "Bill of Materials in CSV format, subparts split"
     type: bom
     dir: .
     options: &bom_options
@@ -22,7 +22,7 @@ variants:

 outputs:
   - name: 'bom_internal_subparts'
-    comment: "Bill of Materials in CSV format, subparts splitted"
+    comment: "Bill of Materials in CSV format, subparts split"
     type: bom
     dir: .
     options: &bom_options