Metadata-Version: 2.1
Name: dlg-home-content
Version: 0.0.4
Summary: model to identify tv sizes using images
Home-page: https://gitlab.com/fractal/dlg_home_content/tree/master/
Author: Fractal Image Group
Author-email: dle@fractal.ai
License: Apache Software License 2.0
Keywords: some keywords
Platform: UNKNOWN
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Natural Language :: English
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Requires-Python: >=3.6
Description-Content-Type: text/markdown
Requires-Dist: fastcore (==1.0.0)
Requires-Dist: nbformat (>=4.4.0)
Requires-Dist: nbconvert (>=5.6.1)
Requires-Dist: pyyaml
Requires-Dist: fastscript (>=0.1.5)
Requires-Dist: absl-py (>=0.10.0)
Requires-Dist: attrs (==20.1.0)
Requires-Dist: backcall (==0.2.0)
Requires-Dist: bleach (==3.1.5)
Requires-Dist: cachetools (==4.1.1)
Requires-Dist: certifi (==2020.6.20)
Requires-Dist: chardet (==3.0.4)
Requires-Dist: cloudpickle (==1.6.0)
Requires-Dist: cycler (==0.10.0)
Requires-Dist: Cython (==0.29.21)
Requires-Dist: dataclasses (==0.6)
Requires-Dist: decorator (==4.4.2)
Requires-Dist: defusedxml (==0.6.0)
Requires-Dist: entrypoints (==0.3)
Requires-Dist: future (==0.18.2)
Requires-Dist: fvcore (==0.1.1.post20200716)
Requires-Dist: google-auth (==1.21.0)
Requires-Dist: google-auth-oauthlib (==0.4.1)
Requires-Dist: grpcio (==1.31.0)
Requires-Dist: idna (==2.10)
Requires-Dist: imageio (==2.9.0)
Requires-Dist: ipykernel (==5.3.4)
Requires-Dist: ipython
Requires-Dist: ipython-genutils (==0.2.0)
Requires-Dist: jedi (==0.17.2)
Requires-Dist: Jinja2 (==2.11.2)
Requires-Dist: jsonschema (==3.2.0)
Requires-Dist: jupyter-client (==6.1.7)
Requires-Dist: jupyter-core (==4.6.3)
Requires-Dist: kiwisolver (==1.2.0)
Requires-Dist: Markdown (==3.2.2)
Requires-Dist: MarkupSafe (==1.1.1)
Requires-Dist: matplotlib (==3.3.1)
Requires-Dist: mistune (==0.8.4)
Requires-Dist: mock (==4.0.2)
Requires-Dist: nbconvert (==5.6.1)
Requires-Dist: nbdev (==1.0.10)
Requires-Dist: nbformat (==5.0.7)
Requires-Dist: networkx (==2.5)
Requires-Dist: numpy (==1.19.1)
Requires-Dist: oauthlib (==3.1.0)
Requires-Dist: opencv-python (==4.4.0.42)
Requires-Dist: packaging (==20.4)
Requires-Dist: pandas (==1.1.1)
Requires-Dist: pandocfilters (==1.4.2)
Requires-Dist: parso (==0.7.1)
Requires-Dist: pexpect (==4.8.0)
Requires-Dist: pickleshare (==0.7.5)
Requires-Dist: Pillow (==7.2.0)
Requires-Dist: portalocker (==2.0.0)
Requires-Dist: prompt-toolkit (==3.0.7)
Requires-Dist: protobuf (==3.13.0)
Requires-Dist: ptyprocess (==0.6.0)
Requires-Dist: pyasn1 (==0.4.8)
Requires-Dist: pyasn1-modules (==0.2.8)
Requires-Dist: pycocotools (==2.0.2)
Requires-Dist: pydot (==1.4.1)
Requires-Dist: Pygments (==2.6.1)
Requires-Dist: pyparsing (==2.4.7)
Requires-Dist: pyrsistent (==0.16.0)
Requires-Dist: python-dateutil (==2.8.1)
Requires-Dist: pytz (==2020.1)
Requires-Dist: PyWavelets (==1.1.1)
Requires-Dist: pyzmq (==19.0.2)
Requires-Dist: requests (==2.24.0)
Requires-Dist: requests-oauthlib (==1.3.0)
Requires-Dist: rsa (==4.6)
Requires-Dist: scikit-image (==0.17.2)
Requires-Dist: scipy (==1.5.2)
Requires-Dist: six (==1.15.0)
Requires-Dist: tabulate (==0.8.7)
Requires-Dist: tensorboard (==2.3.0)
Requires-Dist: tensorboard-plugin-wit (==1.7.0)
Requires-Dist: termcolor (==1.1.0)
Requires-Dist: testpath (==0.4.4)
Requires-Dist: tifffile (==2020.8.25)
Requires-Dist: torch
Requires-Dist: torchvision
Requires-Dist: tornado (==6.0.4)
Requires-Dist: tqdm (==4.48.2)
Requires-Dist: urllib3 (==1.25.10)
Requires-Dist: wcwidth (==0.2.5)
Requires-Dist: webencodings (==0.5.1)
Requires-Dist: Werkzeug (==1.0.1)
Requires-Dist: xlrd (==1.2.0)
Requires-Dist: yacs (==0.1.8)

# dlg-home-content

## setup environment

- `conda env create -f environment.yml`
- install detectron2

- CPU version

```bash
conda install pytorch torchvision cpuonly -c pytorch
python -m pip install detectron2 -f \
  https://dl.fbaipublicfiles.com/detectron2/wheels/cpu/torch1.6/index.html
# then update the conda environment
conda env update --file environment.yml
```

- for other GPU versions, follow the [detectron2 installation guide](https://github.com/facebookresearch/detectron2/blob/master/INSTALL.md)

## CLI commands available

- convert labelme annotations to COCO format

```bash
labelme2coco --labelme_json_location 'data/processed_tv_annotations_v1/' \
  --labels_loc 'assets/keypoints.yml' \
  --save_json 'data/keypoints/' \
  --train_ratio 0.9 --seed 50
```
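The `--train_ratio` and `--seed` flags suggest the converter shuffles the labelme files deterministically and splits them into train/val sets. A minimal sketch of such a seeded 90/10 split (the helper name and file list are illustrative, not the package's actual implementation):

```python
import random

def split_annotations(files, train_ratio=0.9, seed=50):
    """Deterministically shuffle and split file names into train/val lists."""
    rng = random.Random(seed)
    shuffled = sorted(files)   # fix ordering so the split is reproducible
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

train, val = split_annotations([f"img_{i:03d}.json" for i in range(20)])
print(len(train), len(val))  # 18 2
```

Because the files are sorted before the seeded shuffle, re-running with the same seed always reproduces the same split.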

- train on a custom dataset

Training requires three config files:

- `base_cfg`: the name of a base config file shipped with detectron2; see `detectron2/configs` for examples.
- `cfg`: a config file containing the modified parameters; see the `configs` folder for specific examples.
- `data_cfg`: a config file with dataset- and keypoint-related parameters, e.g. `assets/datasets.yml`.

```bash
# normal instance segmentation
custom_train --base_cfg 'COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml' \
  --cfg 'configs/mask_only_exp1.yml' --data_cfg 'assets/datasets.yml'

# instance segmentation with keypoints
custom_train --base_cfg 'COCO-Keypoints/keypoint_rcnn_R_50_FPN_3x.yaml' \
  --cfg 'configs/keypoint_mask_on_exp1.yml' --data_cfg 'assets/datasets.yml'
```
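The three config layers are applied in order (base → experiment cfg → data cfg), so a later file overrides matching keys in the earlier ones. A stdlib-only sketch of that layering idea (the key names below are hypothetical, not the actual detectron2 schema):

```python
def deep_update(base: dict, override: dict) -> dict:
    """Recursively apply override on top of base, returning a new dict."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_update(merged[key], value)
        else:
            merged[key] = value
    return merged

# Hypothetical contents of the three config layers.
base_cfg = {"MODEL": {"MASK_ON": True, "KEYPOINT_ON": False},
            "SOLVER": {"MAX_ITER": 90000}}
exp_cfg = {"SOLVER": {"MAX_ITER": 5000}}          # experiment-specific overrides
data_cfg = {"DATASETS": {"TRAIN": "tv_train"}}    # dataset/keypoint params

cfg = deep_update(deep_update(base_cfg, exp_cfg), data_cfg)
print(cfg["SOLVER"]["MAX_ITER"])  # 5000 — the experiment value wins
```

Keeping the base untouched and overriding only what changes per experiment is what keeps the files in `configs/` small.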

## Inference

### LOGO Detection

Download the latest weight file from [here](https://fractalanalytic-my.sharepoint.com/:u:/r/personal/sindhura_k_fractal_ai/Documents/TV_indentification/logo_detection_weight_files/logo_detection_v2.pth?csf=1&web=1&e=nOtvzp)

```python
from dlg_home_content.tv_detection import InferLogo

config = '../assets/e2e_infer.yml'
model = InferLogo(config)
model.predict(img_loc, visualize=True)  # img_loc: path to the input image
```


### Inference for Keypoint Detection

Download the weight and config files from [here](https://fractalanalytic-my.sharepoint.com/:u:/g/personal/sindhura_k_fractal_ai/EXCaFSHWv3hMo99lvfP4zKIBLBO8dlnWzY7iUAFWYiXHKA?e=23XheZ)

```python
# for inner keypoint detection
from dlg_home_content.inference_pipeline import KeypointInference

config = '../assets/e2e_infer.yml'
# kp_type in ['kp_inner_edge', 'kp_outer_edge', 'kp_sticky_note']
model_inner = KeypointInference(config, kp_type='kp_inner_edge')
predicted_keypoints = model_inner.predict_keypoints(img_loc, visualize=True)
```

### End-to-End Inference pipeline

```python
from dlg_home_content.e2e_inference import E2EInference
config = '../assets/e2e_infer.yml'
final_pipeline = E2EInference(config)
result = final_pipeline.infer(img_loc, 8, 8, True)
```
