Hello!
I used to draw a rectangle of a certain color and thickness in PDF Viewer using annotations (with the JavaScript function addAnnot()).
Could I simply draw a rectangle with any function of the PDF-Tools Library or should I also create an Annotation with the PXCp_Add3DAnnotationW() function? The problem is I'm trying to use only the PDF-Tools in order to manipulate a PDF-Document.
Thanks for any answers!
Hi!
I have some questions on the PXCp_AddLineAnnotationW function:
1. The last parameter of the function is a pointer to a PXC_CommonAnnotInfo structure. It expects, among other things, an integer value for a color. Here is an extract from the docs with pseudo code:
Code: Select all
AnnotInfo.m_Color = RGB(200, 0, 100);
I couldn't find any equivalent of that RGB function in WPF. How is that value calculated? Is RGB(255, 255, 255) = 16777215?
2. A comprehension question: should I draw four single lines in order to create a rectangle, or can I directly draw a rectangle with that function (with the parameter LPCPXC_RectF rect that specifies the bounding rectangle of the annotation)?
This is an extract from my code:
Code: Select all
var borderRect = new PdfXchangePro.PXC_RectF { left = selection.Left,
right = selection.Right,
top = selection.Top,
bottom = selection.Bottom };
int color = 16777215; // RGB(255, 255, 255) ???
var border = new PdfXchangePro.PXC_AnnotBorder { m_Width = StrToDouble(BorderThickness),
m_Type = PdfXchangePro.PXC_AnnotBorderStyle.ABS_Solid };
var borderInfo = new PdfXchangePro.PXC_CommonAnnotInfo{ m_Color = color,
m_Flags = Convert.ToInt32(PdfXchangePro.PXC_AnnotsFlags.AF_ReadOnly),
m_Opacity = _opacity,
m_Border = border };
var startPoint = new PdfXchangePro.PXC_PointF {x = selection.Left, y = selection.Top};
var endPoint = new PdfXchangePro.PXC_PointF {x = selection.Right, y = selection.Bottom};
int retval = PdfXchangePro.PXCp_AddLineAnnotationW(_handle,
0,
ref borderRect,
"xy",
"yx",
ref startPoint,
ref endPoint,
PdfXchangePro.PXC_LineAnnotsType.LAType_None,
PdfXchangePro.PXC_LineAnnotsType.LAType_None,
color,
ref borderInfo); // function returns 0
I didn't define the values for AnnotInfo.m_Border.m_DashArray. No annotation is created: I tested it with the JavaScript command this.getAnnots(0); and it returns null.
Thanks!
Can you send me the PDF generated by your code?
P.S. The RGB 'macro' is equivalent to the following function:
Code: Select all
// r, g, and b in range from 0 to 255
ULONG _RGB(int r, int g, int b)
{
return (ULONG)(r + g * 256 + b * 65536);
}
Tracker Software (Project Director)
When attaching files to any message - please ensure they are archived and posted as a .ZIP, .RAR or .7z format - or they will not be posted - thanks.
I've got it! I had to close the document in the PDF viewer before creating a line annotation with PDF Tools Library function PXCp_AddLineAnnotationW!
I still don't understand whether I can create a rectangle as one annotation, or whether I have to construct it from 4 single lines. The third parameter (LPCPXC_RectF rect) of this function is, according to the documentation, the bounding rectangle of the annotation. What is it used for?
One more question. Is it possible to suppress the pop-up annotation dialog that appears after a double-click on the annotation? Is there any parameter in the PXCp_AddLineAnnotationW function to manage this? I've only found the flag PXC_AnnotsFlags.AF_ReadOnly of the PXC_CommonAnnotInfo class, which makes the line annotation (or the annotation bounding rectangle?) read-only.
Tracker Supp-Stefan
Hi Relapse,
It is definitely possible with the low level functions, but unfortunately the only High Level ones are for adding annotations - not for deleting them.
Best,
Stefan
Tracker Supp-Stefan
Hello relapse,
As mentioned before - there are no high level functions that will allow you to delete annotations.
So you will need to read on annotations in the PDF Reference:
http://wwwimages.adobe.com/www.adobe.co ... ce_1-7.pdf
section 8.4 Annotations in the above document
or section 12.4 Annotations in the ISO version of the file:
http://wwwimages.adobe.com/www.adobe.co ... 0_2008.pdf
And then utilize the low level functions described in 3.2.5 PDF Dictionary Functions of our PDF Tools SDK manual to read and manipulate the annotations dictionary as needed.
Alternatively - you could use JS while you have the files opened in the Viewer AX. This should be quite a lot easier to implement, and will still allow you to create/edit/delete annotations as needed.
Best,
Stefan
Tracker
Tracker Supp-Stefan
Hi Relapse,
There are some snippets inside the manual, but there isn't anything more complex - as those are low level functions giving you access to the very structure of the PDF File and the way you would like to use such methods will greatly vary from case to case. You will need to get yourself acquainted with the PDF specification to be able to use those successfully.
Best,
Stefan
I do read the PDF specification.
I cannot understand how it is possible to access the annotations dictionary of a certain page.
I've found the function PXCp_ObjectGetDictionary, but it needs an object handle. Where can I get it?
Tracker Supp-Stefan
Hi relapse,
You might want to use functions like PXCp_llGetObjectByIndex to obtain an object first, and then here is the sample from the manual for using the PXCp_ObjectGetDictionary function:
Code: Select all
// Retrieve object's dictionary
HPDFOBJECT hObject;
...
HPDFDICTIONARY hDict;
hr = PXCp_ObjectGetDictionary(hObject, &hDict);
if (IS_DS_FAILED(hr))
{
// report error
...
}
Best,
Stefan
I'm trying to use the PXC_Rect function in order to draw a real rectangle rather than an annotation. Here is the declaration from the docs:
HRESULT PXC_Rect(
_PXCContent* content,
double left,
double top,
double right,
double bottom
);
Parameters
content [in] Parameter content specifies the identifier for the page content to which the function will be applied.
What is this identifier for the page content and how can I get it?
Thanks!
Tracker Supp-Stefan
Hi Relapse,
This method is from the PXCLIB40 set of functions - those are aimed at creating new PDF document from scratch - so you can not use that to just add a rectangle to an already existing page I am afraid.
Otherwise - you can see how the content identifier is to be set up in the sample projects in
C:\Program Files\Tracker Software\PDF-XChange PRO 4 SDK\Examples\SDKExamples\<<YOUR Programming language>>\PDFXCDemo
Best,
Stefan
Thanks, Stefan, your patience is honorable.
Is there any difference between
HRESULT PXCp_Init(PDFDocument* pObject, LPCSTR Key, LPCSTR DevCode);
and
HRESULT PXC_NewDocument(_PXCDocument** pdf, LPCSTR key, LPCSTR devCode);
? Are the two parameters PDFDocument* pObject and _PXCDocument** pdf identical?
I've tried to mix the use of both libraries:
Code: Select all
int pageContentIdentifier;
int pdfHandle;
int pdfPage = 0;
PdfXchangePro.PXCp_Init(out pdfHandle, PdfXchangePro.SerialNumber, PdfXchangePro.DevelopmentCode);
PdfXchangePro.PXCp_ReadDocumentW(pdfHandle, _tempFile, 0);
PdfXchange.PXC_GetPage(pdfHandle, pdfPage, out pageContentIdentifier);
PdfXchange.PXC_Rect(pdfHandle, 20, 100, 100, 20);
but I got an AccessViolationException executing the PXC_GetPage function.
I've also found no function to delete a newly created (with PXC_Rect) graphical object, or is that not possible at all?
Tracker Supp-Stefan
Hi Relapse,
I am afraid you can't mix methods from the two libraries. You will need to create and save a PDF file using the PXC_ methods, and then open it and modify it using the PXCp_ ones.
As the PXC_ methods are designed for building up PDF files, there are no delete methods: you are creating a PDF file or page starting from an empty one and only adding the components you want.
Best,
Stefan
Yesterday I managed to draw a rectangle I needed but the restriction is - it must be a new pdf document. It's a pity!
Now I'm trying to delete line annotations directly in the dictionaries. By the way, I can create a line annotation of any thickness with the function PXCp_AddLineAnnotationW; there is no 20-point limit as in JS. But I really miss examples for handling the dictionaries. I've found an example in the forum http://www.tracker-software.com/forum3/ ... nnotationW but it's in C++ and I'm fighting with the translation of the low-level functions' declarations into C#.
Tracker Supp-Stefan
Hello Relapse,
Glad to hear that you got it working. And great to hear there are no width limitations with the PXCp_AddLineAnnotationW method.
As for samples for handling dictionaries - I am afraid that I can't help - any samples would probably be in the PDF Specification itself.
Best,
Stefan
The best advice here is to look at the C# wrappers for other projects. It is important to use the proper marshalling for types like BSTR and LPWSTR (from C# "string" types). If you look at function declarations for DLL imports in C#, you'll often see a function argument prefixed by something like:
Code: Select all
[MarshalAs(UnmanagedType.LPWStr)]
Code: Select all
sometype somefunction([MarshalAs(UnmanagedType.LPWStr)] string InputLPWSTR);
UnmanagedType has a lot of members (LPWStr, BStr, etc) that you can specify for different scenarios. Check MSDN for details or use autocomplete in Visual Studio to see a list.
Also note the use of "ref" and "out" keywords that are used when the API function takes a pointer. "ref" means C# will check to see if the value is initialized; "out" means it may be uninitialized and is expected to be set by the function.
Code: Select all
E.g. C++:
HRESULT calculate_property_of_mystruct(mystruct* input, int* output);
would be imported into C# with:
... calculate_property_of_mystruct(ref mystruct input, out int output);
Lots of reading here:
http://msdn.microsoft.com/en-us/library/26thfadc.aspx
http://msdn.microsoft.com/en-us/library/fzhhdwae.aspx
Using a relay with the ESP32 or ESP8266 is a great way to control AC household appliances remotely. This tutorial explains how to control a relay module with the ESP32 or ESP8266 using MicroPython firmware.
We’ll take a look at how a relay module works, how to connect the relay to the ESP32 or ESP8266 boards and build a web server to control a relay remotely.
We have similar guides using Arduino IDE:
Guide for ESP32 Relay Module with Arduino IDE – Control AC Appliances + Web Server Example
Guide for ESP8266 Relay Module with Arduino IDE – Control AC Appliances + Web Server Example
Prerequisites
To follow this tutorial you need MicroPython firmware installed in your ESP32 or ESP8266 boards. You also need an IDE to write and upload the code to your board. We suggest using Thonny IDE or uPyCraft IDE:
Thonny IDE:
uPyCraft IDE:
Getting Started with uPyCraft IDE
Install uPyCraft IDE (Windows, Mac OS X, Linux)
Flash/Upload MicroPython Firmware to ESP32 and ESP8266
Learn more about MicroPython: MicroPython Programming with ESP32 and ESP8266 eBook.
Introducing Relays
A relay is an electrically operated switch and, like any other switch, it can be turned on or off, letting the current go through or not. It can be controlled with low voltages, like the 3.3V provided by the ESP32/ESP8266 GPIOs, and allows us to control high voltages like 12V, 24V or mains voltage (230V in Europe and 120V in the US).
1, 2, 4, 8, 16 Channels Relay Modules
There are different relay modules with a different number of channels. You can find relay modules with one, two, four, eight and even sixteen channels. The number of channels determines the number of outputs we’ll be able to control.
There are relay modules whose electromagnet can be powered by 5V and with 3.3V. Both can be used with the ESP32 or ESP8266 – you can either use the VIN pin (that provides 5V) or the 3.3V pin.
Additionally, some come with a built-in optocoupler that adds an extra "layer" of protection, optically isolating the ESP boards from the relay circuit.
Get a relay module:
5V 2-channel relay module (with optocoupler)
5V 1-channel relay module (with optocoupler)
5V 8-channel relay module (with optocoupler)
5V 16-channel relay module (with optocoupler)
3.3V 1-channel relay module (with optocoupler)
Relay Pinout
For demonstration purposes, let’s take a look at the pinout of a 2-channel relay module. Using a relay module with a different number of channels is similar.
On the left side, there are two sets of three sockets to connect high voltages, and the pins on the right side (low-voltage) connect to the ESP GPIOs.
Mains Voltage Connections
The relay module shown in the previous photo has two connectors, each with three sockets: common (COM), Normally Closed (NC), and Normally Open (NO).
COM: connect the current you want to control (mains voltage).
NC (Normally Closed): the normally closed configuration is used when you want the relay to be closed by default. The NC and COM pins are connected, meaning the current is flowing unless you send a signal from the ESP to the relay module to open the circuit and stop the current flow.
NO (Normally Open): the normally open configuration works the other way around: there is no connection between the NO and COM pins, so the circuit is broken unless you send a signal from the ESP to close the circuit.
Control Pins
The low-voltage side has a set of four pins and a set of three pins. The first set consists of VCC and GND to power up the module, and input 1 (IN1) and input 2 (IN2) to control the bottom and top relays, respectively.
If your relay module only has one channel, you’ll have just one IN pin. If you have four channels, you’ll have four IN pins, and so on.
The signal you send to the IN pins determines whether the relay is active or not. The relay is triggered when the input goes below about 2V. This means that you'll have the following scenarios:
Normally Closed configuration (NC):
HIGH signal – current is flowing
LOW signal – current is not flowing
Normally Open configuration (NO):
HIGH signal – current is not flowing
LOW signal – current is flowing
You should use a normally closed configuration when the current should be flowing most of the time, and you only want to stop it occasionally.
Use a normally open configuration when you want the current to flow occasionally (for example, turn on a lamp occasionally).
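To keep that inverted logic straight in your own scripts, you can wrap it in two small helper functions. This is just a sketch for a normally open module with an active-low IN pin; GPIO 26 matches the ESP32 wiring used later in this tutorial (use GPIO 5 on an ESP8266):
from machine import Pin

# Relay IN1 pin; most relay boards are active low: 0 = coil energized, 1 = coil released
relay = Pin(26, Pin.OUT, value=1)  # start with the relay off (circuit open in a NO configuration)

def relay_on():
    relay.value(0)   # energize the coil -> the NO contact closes and current flows

def relay_off():
    relay.value(1)   # release the coil -> the NO contact opens and current stops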
Power Supply Selection
The second set of pins consists of GND, VCC, and JD-VCC pins. The JD-VCC pin powers the electromagnet of the relay. Notice that the module has a jumper cap connecting the VCC and JD-VCC pins; the one shown here is yellow, but yours may be a different color.
With the jumper cap on, the VCC and JD-VCC pins are connected. That means the relay electromagnet is directly powered from the ESP power pin, so the relay module and the ESP circuits are not physically isolated from each other.
Without the jumper cap, you need to provide an independent power source to power up the relay’s electromagnet through the JD-VCC pin. That configuration physically isolates the relays from the ESP with the module’s built-in optocoupler, which prevents damage to the ESP in case of electrical spikes.
Wiring a Relay Module to the ESP32/ESP8266
Warning: in this example, we’re dealing with mains voltage. Misuse can result in serious injuries. If you’re not familiar with mains voltage ask someone who is to help you out. While programming the ESP or wiring your circuit make sure everything is disconnected from mains voltage.
Alternatively, you can use a 12V power source to control 12V appliances.
ESP32 Schematic Diagram
Connect the relay module to the ESP32 as shown in the following diagram. The diagram shows wiring for a 2-channel relay module, wiring a different number of channels is similar.
In this example, we’re controlling a lamp. We just want to light up the lamp occasionally, so it is better to use a normally open configuration.
We’re connecting the IN1 pin to GPIO 26, you can use any other suitable GPIO. See ESP32 GPIO Reference Guide.
ESP8266 Schematic Diagram
Follow the next schematic diagram if you’re using an ESP8266.
We’re connecting the IN1 pin to GPIO 5, you can use any other suitable GPIO. See ESP8266 GPIO Reference Guide.
The best ESP8266 pins to use with relays are: GPIO 5, GPIO 4, GPIO 14, GPIO 12 and GPIO 13.
Controlling a Relay Module – MicroPython Code (Script)
The code to control a relay with the ESP32 or ESP8266 is as simple as controlling an LED or any other output. In this example, as we’re using a normally open configuration, we need to send a LOW signal to let the current flow, and a HIGH signal to stop the current flow.
Copy the following code to the main.py file and upload it to your board. It lights up your lamp for 10 seconds and turns it off for another 10 seconds.
# Complete project details at https://RandomNerdTutorials.com
from machine import Pin
from time import sleep
# ESP32 GPIO 26
relay = Pin(26, Pin.OUT)
# ESP8266 GPIO 5
#relay = Pin(5, Pin.OUT)
while True:
# RELAY ON
relay.value(0)
sleep(10)
# RELAY OFF
relay.value(1)
sleep(10)
How the code works
Import the Pin class from the machine module to interact with the GPIOs. We also import the sleep() method from the time module to add delays.
from machine import Pin
from time import sleep
Then, we define a Pin object called relay on GPIO 26 (if you're using an ESP32) and set it as an output.
# ESP32 GPIO 26
relay = Pin(26, Pin.OUT)
In case you’re using an ESP8266, use GPIO 5 instead. Comment the previous line and uncomment the following.
# ESP8266 GPIO 5
#relay = Pin(5, Pin.OUT)
In the while loop, send a LOW signal to light up the lamp for 10 seconds.
# RELAY ON
relay.value(0)
sleep(10)
If you’re using a normally closed configuration, send a HIGH signal to light up the lamp.
Stop the current flow by sending a HIGH signal to the relay pin. If you’re using a normally closed configuration, send a LOW signal to stop the current flow.
# RELAY OFF
relay.value(1)
sleep(10)
Control Relay Module with MicroPython Web Server
In this section, we've created a web server example that allows you to control a relay remotely from your browser.
boot.py
Copy the following code to your boot.py file.
# Complete project details at https://RandomNerdTutorials.com
try:
import usocket as socket
except:
import socket
from machine import Pin
import network
import esp
esp.osdebug(None)
import gc
gc.collect()
ssid = 'REPLACE_WITH_YOUR_SSID'
password = 'REPLACE_WITH_YOUR_PASSWORD'
station = network.WLAN(network.STA_IF)
station.active(True)
station.connect(ssid, password)
while station.isconnected() == False:
pass
print('Connection successful')
print(station.ifconfig())
# ESP32 GPIO 26
relay = Pin(26, Pin.OUT)
# ESP8266 GPIO 5
#relay = Pin(5, Pin.OUT)
Insert your network credentials in the following variables:
ssid = 'REPLACE_WITH_YOUR_SSID'
password = 'REPLACE_WITH_YOUR_PASSWORD'
Uncomment one of the following lines according to the board you're using. By default, it's set to use the ESP32 GPIO.
# ESP32 GPIO 26
relay = Pin(26, Pin.OUT)
# ESP8266 GPIO 5
#relay = Pin(5, Pin.OUT)
main.py
Copy the following to your main.py file.
# Complete project details at https://RandomNerdTutorials.com
def web_page():
if relay.value() == 1:
relay_state = ''
else:
relay_state = 'checked'
html = """<html><head><meta name="viewport" content="width=device-width, initial-scale=1"><style>
body{font-family:Arial; text-align: center; margin: 0px auto; padding-top:30px;}
.switch{position:relative;display:inline-block;width:120px;height:68px}.switch input{display:none}
.slider{position:absolute;top:0;left:0;right:0;bottom:0;background-color:#ccc;border-radius:34px}
.slider:before{position:absolute;content:"";height:52px;width:52px;left:8px;bottom:8px;background-color:#fff;-webkit-transition:.4s;transition:.4s;border-radius:68px}
input:checked+.slider{background-color:#2196F3}
input:checked+.slider:before{-webkit-transform:translateX(52px);-ms-transform:translateX(52px);transform:translateX(52px)}
</style><script>function toggleCheckbox(element) { var xhr = new XMLHttpRequest(); if(element.checked){ xhr.open("GET", "/?relay=on", true); }
else { xhr.open("GET", "/?relay=off", true); } xhr.send(); }</script></head><body>
<h1>ESP Relay Web Server</h1><label class="switch"><input type="checkbox" onchange="toggleCheckbox(this)" %s><span class="slider">
</span></label></body></html>""" % (relay_state)
return html
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(('', 80))
s.listen(5)
while True:
try:
if gc.mem_free() < 102000:
gc.collect()
conn, addr = s.accept()
conn.settimeout(3.0)
print('Got a connection from %s' % str(addr))
request = conn.recv(1024)
conn.settimeout(None)
request = str(request)
print('Content = %s' % request)
relay_on = request.find('/?relay=on')
relay_off = request.find('/?relay=off')
if relay_on == 6:
print('RELAY ON')
relay.value(0)
if relay_off == 6:
print('RELAY OFF')
relay.value(1)
response = web_page()
conn.send('HTTP/1.1 200 OK\n')
conn.send('Content-Type: text/html\n')
conn.send('Connection: close\n\n')
conn.sendall(response)
conn.close()
except OSError as e:
conn.close()
print('Connection closed')
We won’t explain how this code works because we already have a very similar tutorial with detailed explanation of each line of code. Read the next project:
Demonstration
After making the necessary changes, upload the boot.py and main.py files to your board. Press the EN/RST button and in the Shell you should get the ESP IP address.
Then, open a browser in your local network and type the ESP IP address to get access to the web server.
You should get a web page with a toggle button that allows you to control your relay remotely using your smartphone or your computer.
Enclosure for Safety
For a final project, make sure you place your relay module and ESP inside an enclosure to avoid any AC pins exposed.
Wrapping Up
In this tutorial you’ve learned how to control relays with the ESP32 or ESP8266 using MicroPython. We have similar guides using Arduino IDE:
[Arduino IDE] Guide to control a Relay Module with the ESP32
[Arduino IDE] Guide to control a Relay Module with ESP8266
Controlling a relay with the ESP32 or ESP8266 is as easy as controlling any other output; you just need to send HIGH and LOW signals as you would do to control an LED.
You can use our web server examples that control outputs to control relays. You just need to pay attention to the configuration you’re using. In case you’re using a normally open configuration, the relay works with inverted logic. You can use the following web server examples to control your relay:
ESP32 Web Server – Arduino IDE
ESP32 Web Server using SPIFFS (control outputs)
ESP32/ESP8266 MicroPython Web Server – Control Outputs
Learn more about MicroPython with the ESP32 and ESP8266 with our resources:
Thanks for reading.
Predicting game churn users with scikit-learn
In scikit-learn, Naive Bayes comes in the following three types:
Gaussian Naive Bayes
Generally used for prediction models with continuous features
Multinomial Naive Bayes
Generally used for prediction models with discrete features
Bernoulli Naive Bayes
Generally used for prediction models with binary (Bernoulli-distributed) features
Here is how these three models are used:
from sklearn.naive_bayes import GaussianNB
from sklearn.naive_bayes import MultinomialNB
from sklearn.naive_bayes import BernoulliNB
# construct the feature values x
# construct the label data y
clfG = GaussianNB().fit(x, y)
clfM = MultinomialNB().fit(x, y)
clfB = BernoulliNB().fit(x, y)
Below is a concrete use case: using Naive Bayes to predict a game's churned users. First, prepare the sample data. Based on an understanding of the game, the following features (one column per field in the sample file) were selected:
# file description # statistics date # appid # user ID # churn flag # active days in the previous two weeks # cumulative logins in the previous two weeks # cumulative game time in the previous two weeks # active days in the previous week # cumulative logins in the previous week # cumulative game time in the previous week # last login date # days since the user started playing # 8 * cumulative login days in the previous week / days since the user started playing # trend of active days # weekly trend of login count # weekly trend of game time # cumulative paying days in the previous two weeks # cumulative payment amount in the previous two weeks # cumulative payment count in the previous two weeks # cumulative paying days in the previous week # cumulative payment amount in the previous week # cumulative payment count in the previous week # weekly trend of paying days # weekly trend of payment amount
Now let's read in the data:
import pandas as pd
from matplotlib import pyplot as plt
%matplotlib inline
df=pd.read_table("lost_user_sample", header = None)
Construct the sample data:
y=df[3]
print y.values
x=df[[4,7, 5,8, 6,9, 18,21, 19,22, 20,23, 15,16,17, 24,25]]
print x.values
Use Gaussian Naive Bayes and Multinomial Naive Bayes, respectively, to make the predictions:
from sklearn.naive_bayes import GaussianNB
from sklearn.naive_bayes import MultinomialNB
from sklearn.cross_validation import train_test_split  # older scikit-learn; newer versions use sklearn.model_selection
from sklearn import metrics
print "Total samples:", len(y)
x_data_train,x_data_test,y_data_train,y_data_test = train_test_split(x, y, test_size=0.2, random_state=1)
train_count = len(y_data_train)
print "Training samples:", train_count
#--------------------------------------------------------
print "\n\nGaussianNB:"
clf = GaussianNB().fit(x_data_train, y_data_train)
acc_test = clf.score(x_data_test,y_data_test)
acc_all = clf.score(x,y)
print "Test set accuracy:", acc_test
print "Overall accuracy:", acc_all
y_pred = clf.predict(x_data_test)
print metrics.accuracy_score(y_data_test, y_pred)
print metrics.confusion_matrix(y_data_test, y_pred)
print metrics.recall_score(y_data_test, y_pred)
#--------------------------------------------------------
print "\n\nMultinomialNB:"
clf_MNB = MultinomialNB().fit(x_data_train,y_data_train)
acc_test = clf_MNB.score(x_data_test,y_data_test)
acc_all = clf_MNB.score(x,y)
print "Test set accuracy:", acc_test
print "Overall accuracy:", acc_all
y_pred = clf_MNB.predict(x_data_test)
print metrics.accuracy_score(y_data_test, y_pred)
print metrics.confusion_matrix(y_data_test, y_pred)
print metrics.recall_score(y_data_test, y_pred)
Hello guys, how are you all? Hope you all are fine. I am trying to read a macro-enabled Excel worksheet using pandas.read_excel with the xlrd library, but I am facing the error xlrd.biffh.XLRDError: Excel xlsx file; not supported.
So here I am with all the possible solutions.
What is Error
When you try to read a macro-enabled Excel worksheet using pandas.read_excel with the xlrd library, it throws an error like the one below:
2020-12-12T21:09:53.441+05:30 [APP/PROC/WEB/0] [ERR] df1=pd.read_excel(os.path.join(APP_PATH, os.path.join("Data", "aug_latest.xlsm")),sheet_name=None)
2020-12-12T21:09:53.441+05:30 [APP/PROC/WEB/0] [ERR] return open_workbook(filepath_or_buffer)
2020-12-12T21:09:53.441+05:30 [APP/PROC/WEB/0] [ERR] File "/home/vcap/deps/0/python/lib/python3.8/site-packages/xlrd/__init__.py", line 170, in open_workbook
2020-12-12T21:09:53.441+05:30 [APP/PROC/WEB/0] [ERR] raise XLRDError(FILE_FORMAT_DESCRIPTIONS[file_format]+'; not supported')
2020-12-12T21:09:53.441+05:30 [APP/PROC/WEB/0] [ERR] xlrd.biffh.XLRDError: Excel xlsx file; not supported
Solutions
xlrd.biffh.XLRDError: Excel xlsx file; not supported
The latest version of xlrd (2.0.1) only supports .xls files. Installing the older version 1.2.0 worked for me to open .xlsx files.
Solution 1
Installing the older version, 1.2.0, worked for me to open .xlsx files.
Solution 2
The latest version of xlrd (2.0.1) only supports .xls files.
If you are prepared to risk potential security vulnerabilities, and risk incorrect parsing of certain files, this error can be solved by installing an older version of xlrd.
Use the command below in a shell or cmd prompt:
pip install xlrd==1.2.0
Solution 3
This is due to potential security vulnerabilities relating to the use of xlrd version 1.2 or earlier for reading .xlsx files.
In your case, the solution is to:
install openpyxl: https://openpyxl.readthedocs.io/en/stable/
change your pandas code to be:
pandas.read_excel('cat.xlsx', engine='openpyxl')
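Applied to the macro-enabled workbook from the traceback above, a minimal sketch would look like this (the file name and the per-sheet loop are just assumptions based on the original call):
import pandas as pd

# Read every sheet of a macro-enabled workbook (.xlsm) with the openpyxl engine
sheets = pd.read_excel("aug_latest.xlsm", sheet_name=None, engine="openpyxl")

for name, df in sheets.items():
    print(name, df.shape)  # quick sanity check of each sheet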
I hope one of the above 3 solutions works for you too. That's all about the xlrd.biffh.XLRDError: Excel xlsx file; not supported error. Hope this tutorial helped you a lot. Comment below with your thoughts and queries, and also leave your suggestions here.
I have sample response with friends list from facebook:
[{u'uid': 513351886, u'name': u'Mohammed Hossein', u'pic_small': u'http://profile.ak.fbcdn.net/hprofile-ak-snc4/hs643.snc3/27383_513351886_4933_t.jpg'},
{u'uid': 516583220, u'name': u'Sim Salabim', u'pic_small': u'http://profile.ak.fbcdn.net/hprofile-ak-snc4/hs348.snc4/41505_516583220_5681339_t.jpg'}]
How I could parse through this list encoding key's of the dictionaries to ascii ? I've tried something like this :
response = simplejson.load(urllib.urlopen(REST_SERVER, data))
for k in response:
for id, stuff in k.items():
id.encode("ascii")
logging.debug("id: %s" % id)
return response
But encoded keys are not saved and as a result I'm still getting unicode values.
First: do you really need to do this? The strings are in Unicode for a reason: you simply can't represent everything in plain ASCII that you can in Unicode. This probably won't be a problem for your dictionary keys 'uid', 'name' and 'pic_small'; but it probably won't be a problem to leave them as Unicode, either. (The 'simplejson' library does not know anything about your data, so it uses Unicode for every string - better safe than sorry.)
Anyway:
In Python, strings cannot be modified. The .encode method does not change the string; it returns a new string that is the encoded version.
What you want to do is produce a new dictionary, which replaces the keys with the encoded keys. We can do this by passing each pair of (encoded key, original value) as *args for the dict constructor.
That looks like:
dict((k.encode('ascii'), v) for (k, v) in original.items())
Similarly, we can use a list comprehension to apply this to every dictionary, and create the new list. (We can modify the list in-place, but this way is cleaner.)
response = simplejson.load(urllib.urlopen(REST_SERVER, data))
# We create the list of modified dictionaries, and re-assign 'response' to it:
response = [
dict((k.encode('ascii'), v) for (k, v) in original.items()) # the modified version
for original in response # of each original dictionary.
]
return response
Your other responses hint at this but don't come out and say it: dictionary lookup and string comparison in Python transparently convert between Unicode and ASCII:
>>> x = {u'foo':'bar'} # unicode key, ascii value
>>> x['foo'] # look up by ascii
'bar'
>>> x[u'foo'] # or by unicode
'bar'
>>> x['foo'] == u'bar' # ascii value has a unicode equivalent
True
So for most uses of a dictionary converted from JSON, you don't usually need to worry about the fact that everything's Unicode.
A Python Tutorial, the Basics
A very easy Python Tutorial!
#Tutorial Jam
@elipie's jam (ping)
Here is a basic tutorial for Python, for beginners!
Table of Contents:
1. The developer of python
2. Comments/Hashtags
3. Print and input statements
f' strings
4. If, Elif, Else statements
5. Common Modules
1. Developer of Python
It was created in the late 1980s by Guido van Rossum in the Netherlands. It was made as a successor to the ABC language, capable of interfacing with the Amoeba operating system. Its name is Python because, while he was creating the language, he was also reading 'Monty Python's Flying Circus'. Guido van Rossum thought that the language needed a short, unique name, so he chose Python.
For more about Guido van Rossum, click here
2. Comments/Hashtags
Comments are side notes you can write in python. They can be used, as I said before:
sidenotes
instructions or steps
etc.
How to write comments:
#This is a comment
The output is nothing because:
It is a comment and comments are invisible to the computer
Comments are not printed in Python
So just to make sure, hashtags are used to make comments. And remember, comments are ignored by the computer.
3. Print and Input statements
1. Print Statements
Print statements, printed as print, are statements used to print sentences or words. So for example:
print("Hello World!")
The output would be:
Hello World!
So you can see that the print statement is used to print words or sentences.
2. Input Statements
Input statements, printed as input, are statements used to 'ask'. For example:
input("What is your name?")
The output would be:
What is your name?
However, with inputs, you can write in them. You can also 'name' the input. Like this:
name = input("What is your name?")
You could respond by doing this:
What is your name? JBYT27
So pretty much, inputs are used to store a value that you can use later.
Then you could add an if statement, but let's discuss that later.
3. f strings
f strings, written as f (placed before the opening quotation mark), are used to insert an already defined value into a printed or input string. So what I mean is, say I put an f string on a print statement. Like this:
print(f"")
The output right now, is nothing. You didn't print anything. But say you add this:
print(f"Hello {name}!")
This would only work if the variable name has already been defined. In other words, say you had an input before and you did this to it:
name = input()
Then the f string would work. Say for the input, you typed in your name. Then the print statement would print:
Hello (whatever your name was)!
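Putting the input and the f string together, a complete little program looks like this:
name = input("What is your name? ")  # ask the user for their name
print(f"Hello {name}!")              # greet them using the f string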
Another way you could do this is with commas. This doesn't use an f string, but the result is similar. So here is how you would print it:
name = input()
...
print("Hello ", name, "!")
The output would be the same as well! The commas separate the values, and print() joins them with spaces, converting each value for you. But JBYT27, why not a plus sign? You can use + as well, but + only joins strings: if you try to add a number to a string with +, Python gives an error unless you convert the number with str() first. That's just how the Python syntax works.
Really, the only time you would use this is to give back your name, or to check whether one value is equal to another, which we'll learn in a sec.
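To make the difference concrete, here is a quick side-by-side; name is a string and age is a number (the values are just examples):
name = "JBYT27"
age = 12

print("Hello ", name, "!")     # commas: print() converts each value and joins them with spaces
print("Hello " + name + "!")   # + works here because both sides are strings
print("Age: " + str(age))      # + needs str() when the value is not a string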
4. If, Elif, Else Statements
1. If Statements
If statements, printed as if, are literally as they are called, if sentences. They see if a sentence equals or is something to an object, it creates an effect. You could think an if statement as a cause and effect. An example of a if statement is:
name = input("What is your name?")
#asking for name
if name == "JBYT27":
print("Hello Administrator!")
The output could be:
What is your name? JBYT27
Hello Administrator!
However, say it isn't JBYT27. This is where the else, elif, try, and except statements come in!
2. Elif Statements
Elif statements, written as elif, are pretty much if statements. It's just that the words else and if are combined. So say you wanted to add more if statements. Then you would do this:
if name == "JBYT27":
print("Hello Administrator!")
elif name == "Code":
print("Hello Code!")
It's just adding more if statements, just adding a else to it!
3. Else Statements
Else statements, written as else, are like if and elif statements. They are used to tell the computer that if something is not this and not that, go to this other result. You can use them like this (following up from the code above):
if name == "JBYT27":
print("Hello admin!")
elif name == "Squid":
print("Hello Lord Squod!")
else:
print(f"Hello {name}!")
5. Common Modules
Common modules include:
os
time
math
sys
replit
turtle
tkinter
random
etc.
So all these modules that I listed, i'll tell you how to use, step by step! ;) But wait, what are modules?
Modules are like packages that are pre-installed with Python. You just have to import them into your code to use them (please correct me if I'm wrong). So take this code:
import os
...
When you do this, you successfully import the os module! But wait, what can you do with it? The most common way people use the os module is to clear the console (the black part), which makes your screen clear-er. But, since there are many, many, many modules, you can also clear the screen using the replit module. The code is like this:
import replit
...
replit.clear()
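If you'd rather clear the console with the os module itself, a common sketch is to run the terminal's own clear command (this assumes a Linux/macOS style terminal, which is what repls use; on Windows the command is cls):
import os

os.system("clear")  # wipes everything currently shown in the console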
But one amazing thing about this importing is you can make things specific. Like say you only want to import pi and sqrt from the math package. This is the code:
from math import pi, sqrt
Let me mention that when you do this, never, ever add an and. Like from ... import ... and .... That is just horrible and stupid and... Just don't do it :)
Next is the time module
You can use the time module for:
time delay
scroll text
And yeah, that's pretty much it (i think)
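For example, here is a tiny sketch that uses time.sleep() both for a plain delay and for a simple scrolling-text effect:
import time

print("Loading...")
time.sleep(2)  # pause the program for 2 seconds

# "scroll" a message by printing one character at a time
for letter in "Hello there!":
    print(letter, end="", flush=True)
    time.sleep(0.1)
print()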
Note:
All of the import syntax is the same except for the names
Next is tkinter, turtle
You can use the tkinter module for GUI's (screen playing), you can import it in a normal python, or you can do this in a new repl.
You can use the turtle for drawing, it isn't used much for web developing though.
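As a tiny taste of turtle, this sketch draws a square; it needs a graphical window, so it won't run in a plain console repl:
import turtle

t = turtle.Turtle()
for _ in range(4):   # a square has 4 equal sides
    t.forward(100)   # move 100 pixels forward
    t.right(90)      # turn 90 degrees to the right
turtle.done()        # keep the window open until you close it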
The math and sys
The math module is used for math calculations. The sys module gives you access to variables and functions used by the Python interpreter itself. I don't really know how I could explain it to you better, but for more, click here
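A couple of quick examples to make that concrete:
import math
import sys

print(math.pi)        # 3.141592653589793
print(math.sqrt(16))  # 4.0

print(sys.version)    # which Python version is running the script
print(sys.argv)       # the command-line arguments the script was started with
sys.exit()            # stop the program right here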
Random
The random module is used for randomizing variables and strings. Say you wanted to randomize a list. Here would be the code:
import random
...
a_list = ["JBYT27","pie","cat","dog"]
...
random.choice(a_list)
The output would be a random choice from the variable/list. So it could be pie, JBYT27, cat, or dog. From the random module, there are many things you can import, but the most common are:
choice
randrange
etc.
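For completeness, here is what the number-picking functions look like next to choice():
import random

print(random.randint(1, 10))    # random whole number from 1 to 10, both ends included
print(random.randrange(1, 10))  # random whole number from 1 to 9 (the stop value is excluded)
print(random.choice(["JBYT27", "pie", "cat", "dog"]))  # random item from a list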
And that's all for modules. If you want links, click below.
Links for modules:
And that's it!
Hooray! We made it through without sleeping!
Credits to:
Many coders for tutorials
Books and websites
replit
etc.
Links:
Web links:
ranging from a few days or hours, if you like reading
Video links:
ranging from 1-12 hours, if you don't like reading
Otherwise:
ranging from 5 hours to a few days, replit tutorial links
I hope you enjoyed this tutorial! I'll cya on the next post!
stay safe!
[1] Python Made EZ!
Hi everyone!
Hope y'all are doing great! School is starting real soon, so I hope you have been studying to get ready, or are enjoying the last of vacation!
So I made this tutorial on python so that others can try to learn from it and get better! Hopefully, what I say will be comprehensive and easy to read.
Most of it I will write, but sometimes I will include some stuff from other websites which explain better than me. I will put what I've taken in italic, and the sources and helpful links at the bottom.
By the way, this is the first of tutorials in languages I'm making!
I will be covering:
Hello World!: History of Python
Key Terms
Comments
print
Data Types
Variables
- Printing Variables
- Naming Variables
- Changing Variables
Concatenation
Operators
Comparison Operators
Conditionals
- if
- elif
- else
input
A Bit of Lists
for Loops
while Loops
Functions
Imports
- time
- random
- math
Small Programs and Useful Stuff
ANSI Escape Codes
Links
Goodbye World!: End
Well without any further ado, let's get on with it!
Hello World!: History of Python
Python is a general purpose programming language. It was created by Guido Van Rossum and released in 1991. One of the main features of it is its readability, simple syntax, and few keywords, which makes it great for beginners (with no prior experience of coding) to learn it.
Fun fact: Guido Van Rossum was reading the scripts of Monty Python when he was creating the language; he needed "a name that was short, unique, and slightly mysterious" so he decided to call the language Python.
(Last year we had to make a poem on a important person in Computer Science, so I made one on him: https://docs.google.com/document/d/1yf2T2fFaS3Vwk7zkvN1nPOr8XPXJroL1yHI7z5qhaRc/edit?usp=sharing)
Key Terms
Now before we continue, just a few words you should know:
Console: The black part located at the right/bottom of your screen
Input: stuff that is taken in by the computer (more on this later)
Ouput: the information processed and sent out by the computer (usually in the console)
Errors: actually, a good thing! Don't worry if you have an error, just try to learn from it and correct it. That's how you can improve, by knowing how to correct errors.
Execute: run a piece of code
Comments
Comments are used for explaining your code, making it more readable, and to prevent execution when testing code.
This is how to comment:
# this is a comment
# it starts with a hashtag #
# Python will ignore and not run anything after the hashtag
You can also have multiline comments:
"""this is a multiline comment
I can make it very long!"""
print
The print() function is used for outputting a message (object) onto the console. This is how you use it:
print("Something.")
# remember this is a comment
# you can use double quotes "
# or single quotes '
print('Using single quotes')
print("Is the same as using double quotes")
You can also triple quotes for big messages.
Example:
print("Hello World!")
print("""
Rules:
[1] Code
[2] Be nice
[3] Lol
[4] Repeat
""")
Output:
Hello World!Rules:[1] Code [2] Be nice[3] Lol[4] Repeat
Data Types
Data types are the classification or categorization of data items.
These are the 4 main data types:
int: (integer) a whole number
12 is an int, so is 902.
str: (string) a sequence of characters
"Hi" is a str, so is "New York City".
float: (float) a decimal
-90.0 is a float, so is 128.84
bool: (boolean) data type with 2 possible values; True and False
Note that True has a capital T and False has a capital F!
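If you are ever unsure which data type a value is, the built-in type() function will tell you:
print(type(12))      # <class 'int'>
print(type("Hi"))    # <class 'str'>
print(type(128.84))  # <class 'float'>
print(type(True))    # <class 'bool'>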
Variables
Variables are used for containing/storing information.
Example:
name = "Lucy" # this variable contains a str
age = 25 # this variable contains an int
height = 160.5 # this variable contains a float
can_vote = True # this variable contains a Boolean that is True (because Lucy is 25 y/o)
Printing variables:
To print variables, you simply do print(variableName):
print(name)
print(age)
print(height)
print(can_vote)
Output:
Lucy
25
160.5
True
Naming Variables:
You should try to make variables with a descriptive name. For example, if you have a variable with an age, an appropriate name would be age, not how_old or number_years.
Some rules for naming variables:
must start with a letter (not a number)
no spaces (use underscores)
no keywords (like print, input, or, etc.)
Changing Variables:
You can change variables to other values.
For example:
x = 18
print(x)
x = 19
print(x)
# the output will be:
# 18
# 19
As you can see, we have changed the variable x from the initial value of 18 to 19.
Concatenation
Let's go back to our first 3 variables:
name = "Lucy"
age = 25
height = 160.5
What if we want to make a sentence like this: Her name is Lucy, she is 25 years old and she measures 160.5 cm.
Of course, we could just print that whole thing like this: print("Her name is Lucy, she is 25 years old and she measures 160.5 cm.")
But if we want to do this with variables, we could do it something like this:
print("Her name is " + name + ", she is " + age + " years old and she measures " + height + " cm.")
# try running this!
Aha! If you ran it, you should have gotten a TypeError: can only concatenate str (not "int") to str.
Basically, it means that you cannot concatenate an int to a str. But what does concatenate mean?
Concatenate means join/link together, like the concatenation of "sand" and "castle" is "sandcastle"
In the previous code, we want to concatenate the bits of sentences ("Her name is ", ", she is", etc.) as well as the variables (name, age, and height).
Since the computer can only concatenate str together, we simply have to convert those variables into str, like so:
print("Her name is " + name + ", she is " + str(age) + " years old and she measures " + str(height) + " cm.")
# since name is already a str, no need to convert it
Output:
Her name is Lucy, she is 25 years old and she measures 160.5 cm.
Operators
A symbol or function denoting an operation
Basically operators can be used in math.
List of operators:
+ For adding numbers (can also be used for concatenation) | Eg: 12 + 89 = 101
- For subtracting numbers | Eg: 65 - 5 = 60
* For multiplying numbers | Eg: 12 * 4 = 48
/ For dividing numbers | Eg: 60 / 5 = 12
** Exponentiation ("to the power of") | Eg: 2**3 = 8
// Floor division (divides numbers and takes away everything after the decimal point) | Eg: 100 // 3 = 33
% Modulo (divides numbers and returns what's left over (the remainder)) | Eg: 50 % 30 = 20
These operators can be used for decreasing/increasing variables.
Example:
x = 12
x += 3
print(x)
# this will output 15, because 12 + 3 = 15
You can replace the + in += by any other operator that you want:
x = 6
x *= 5
print(x)
y = 9
y /= 3
print(y)
# this will output 30 and then below 3.
Also: x += y is just a shorter version of writing x = x + y; both work the same
Comparison Operators
Comparsion operators are for, well, comparing things. They return a Boolean value, True or False. They can be used in conditionals.
List of comparison operators:
== equal to | Eg: 7 == 7
!= not equal to | Eg: 7 != 8
> bigger than | Eg: 12 > 8
< smaller than | Eg: 7 < 9
>= bigger than or equal to | Eg: 19 >= 19
<= smaller than or equal to | Eg: 1 <= 4
If we type these into the console, we will get either True or False:
6 > 7 # will return False
12 < 80 # will return True
786 != 787 # will return True
95 <= 96 # will return True
Conditionals
Conditionals are used to verify if an expression is True or False.
if
Example: we want to see if a number is bigger than another one.
How to say in english: "If the number 10 is bigger than the number 5, then etc.
How to say it in Python:
if 10 > 5:
# etc.
All the code that is indented will be inside that if statement. It will only run if the condition is verified.
You can also use variables in conditionals:
x = 20
y = 40
if x < y:
print("20 is smaller than 40!")
# the output of this program will be "20 is smaller than 40!" because the condition (x < y) is True.
elif
elif is basically like if; it checks if several conditions are True
Example:
age = 16
if age == 12:
print("You're 12 years old!")
elif age == 14:
print("You're 14 years old!")
elif age == 16:
print("You're 16 years old!")
This program will output:
You're 16 years old!
Because age = 16.
else
else usually comes after the if/elif. Like the name implies, the code inside it only executes if the previous conditions are False.
Example:
age = 12
if age >= 18:
print("You can vote!")
else:
print("You can't vote yet!")
Output:
You can't vote yet!
Because age < 18.
input
The input function is used to prompt the user. It will stop the program until the user types something and presses the return key.
You can assign the input to a variable to store what the user types.
For example:
username = input("Enter your username: ")
# then you can print the username
print("Welcome, "+str(username)+"!")
Output:
Enter your username: Bookie0
Welcome, Bookie0!
By default, the input converts what the user writes into str, but you can specify it like this:
number = int(input("Enter a number: ")) # converts what the user says into an int
# if the user types a str or float, then there will be an error message.
# doing int(input()) is useful for calculations, now we can do this:
number += 10
print("If you add 10 to that number, you get: "+ str(number)) # remember to convert it to str for concatenation!
Output:
Enter a number: 189
If you add 10 to that number, you get: 199
You can also do float(input("")) to convert it to float.
Now, here is a little program summarizing a bit of what you've learnt so far.
Full program:
username = input("Username: ")
password = input("Password: ")
admin_username = "Mr.ADMIN"
admin_password = "[email protected]"
if username == admin_username:
if password == admin_password:
print("Welcome Admin! You are the best!")
else:
print("Wrong password!")
else:
print("Welcome, "+str(username)+"!")
Now a detailed version:
# inputs
username = input("Username: ") # asks user for the username
password = input("Password: ") # asks user for the password
# variables
admin_username = "Mr.ADMIN" # setting the admin username
admin_password = "[email protected]" # setting the admin passsword
# conditionals
if username == admin_username: # if the user entered the exact admin username
if password == admin_password: # if the user enters the exact and correct admin password
print("Welcome Admin! You are the best!") # a welcome message only to the admin
else: # if the user gets the admin password wrong
print("Error! Wrong password!") # an error message appears
else: # if the user enters something different than the admin username
print("Welcome, general user "+str(username)+"!") # a welcome message only for general users
Output:
An option:
Username: Mr.ADMIN
Password: i dont know
Error! Wrong password!
Another option:
Username: Mr.ADMIN
Password: [email protected]
Welcome Admin! You are the best!
Final option:
Username: Bob
Password: Chee$e
Welcome, general user Bob!
A bit of lists
A list is a collection which is ordered and changeable. Lists are written with square brackets: []
meat = ["beef", "lamb", "chicken"]
print(meat)
Output:
['beef', 'lamb', 'chicken']
You can access specific items of the list with the index number. Now here is the kinda tricky part. Indexes start at 0, meaning that the first item of the list has an index of 0, the second item has an index of 1, the third item has an index of 2, etc.
meat = ["beef", "lamb", "chicken"]
# Index: 0 1 2 etc.
print(meat[2]) # will output "chicken" because it is at index 2
You can also use negative indexing: index -1 means the last item, index -2 means the second to last item, etc.
meat = ["beef", "lamb", "chicken"]
# Index: -3 -2 -1 etc.
print(meat[-3]) # will output "beef" because it is at index -3
You can add items in the list using append():
meat = ["beef", "lamb", "chicken"]
meat.append("pork")
print(meat)
Output:
['beef', 'lamb', 'chicken', 'pork']
"pork" will be added at the end of the list.
For removing items in the list, use remove():
meat = ['beef', 'lamb', 'chicken']
meat.remove("lamb")
print(meat)
Output:
['beef', 'chicken']
You can also use del to remove items at a specific index:
meat = ['beef', 'lamb', 'chicken']
del meat[0]
print(meat)
Output:
['lamb', 'chicken']
There are also many other things you can do with lists, check out this: https://www.w3schools.com/python/python_lists.asp for more info!
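A few more of those list tools, just to give you an idea (all of these are built into Python):
meat = ["beef", "lamb", "chicken"]

print(len(meat))        # 3 -> how many items are in the list
meat.insert(1, "pork")  # put "pork" at index 1
meat.sort()             # sort the list alphabetically
print(meat)             # ['beef', 'chicken', 'lamb', 'pork']
last = meat.pop()       # remove and return the last item
print(last)             # pork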
for loops
A for loop is used for iterating over a sequence. Basically, it runs a piece of code for a specific number of times.
For example:
for i in range(5):
print("Hello!")
Output:
Hello!
Hello!
Hello!
Hello!
Hello!
You can also use the for loop to print each item in a list (using the list from above):
meat = ['beef', 'lamb', 'chicken']
for i in meat:
print(i)
Output:
beef
lamb
chicken
while loops
while loops will run a piece of code as long as the condition is True.
For example:
x = 1 # sets x to 1
while x <= 10: # will repeat 10 times
print(x) # prints x
x += 1 # increments (adds 1) to x
Ouput:
1
2
3
4
5
6
7
8
9
10
You can also make while loops go on for infinity, like so (useful for spamming lol):
while True:
    print("Theres no stopping me nowwwww!")
Output:
Theres no stopping me nowwwww!
Theres no stopping me nowwwww!
Theres no stopping me nowwwww!
Theres no stopping me nowwwww!
Theres no stopping me nowwwww!
# etc until infinity
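If you ever want a way out of a loop like that, Python also has a break statement (not covered above, but standard); here is a small sketch:
count = 0
while True:
    print("Theres no stopping me nowwwww!")
    count += 1
    if count == 3: # after 3 prints, there IS stopping me
        break
print("Okay, done spamming.")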
Functions
A function is a block of code that only runs when it is called.
For example, instead of having to type a piece of code several times, you can put that piece of code inside a function, and then call the function whenever you need it.
def greeting(): # defining the function
    print("Bonjour!") # everything that is indented will be executed when the function is called

greeting() # calling the function
# you can now call this function whenever you want, instead of always writing the same code every time
Output:
Bonjour!
return and arguments
The return statement is used in functions. It ends the function and "returns" the result, i.e. the value of the expression following the return keyword, to the caller. It is not mandatory; you don't have to use it.
You can also have arguments inside a function. They let you pass values into the function. The arguments go in the parentheses.
For example:
def sum(x, y): # x and y are the arguments
    total = x + y
    return total # returns x + y to the caller

result = sum(4, 5) # you can change those to what you want
print(result) # this will output 9, because 4+5 = 9
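To show that return really is optional, here is a tiny sketch: a function with no return statement gives back None:
def greet(name): # no return in here
    print("Hello, " + name + "!")

result = greet("Sam") # prints Hello, Sam!
print(result) # prints None, because greet() never returns anything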
Imports
time
You can use time in your Python programs.
How to make the program wait:
# first import time
import time
print("Hello!")
# then for the program to wait
time.sleep(1) # write how long you want to wait (in seconds) in the parentheses
print("Bye!")
Output:
Hello!
# program will wait 1 second
Bye!
You can also do this (simpler):
import time
from time import sleep
# instead of time.sleep(), do sleep()
# its the same
print("time.sleep(1)...")
time.sleep(1)
print("...is the same as...")
sleep(1)
print("sleep(1)!")
random
You can use the random module to randomly pick numbers with randint():
# remember to import!
import random
from random import randint
rand_num = randint(1,5)
# this will output a random number between 1 and 5 inclusive!
# this means the possible numbers are 1, 2, 3, 4, or 5
The reason I am pointing this out is because you can also use randrange():
import random
from random import randrange
rand_num = randrange(1,5)
# this will output a random number between 1 inclusive and 5 NON-inclusive (or 4 inclusive)!
# this means the possible numbers are 1, 2, 3, or 4
You can also randomly pick an item from a list with choice():
import random
from random import choice
meat = ["beef", "lamb", "chicken"]
rand_meat = choice(meat)
print(rand_meat)
# this will output a randomly chosen item of the list meat
# the possible outcomes are beef, lamb, or chicken.
math
First, some functions are already built into Python: min() and max(). They return the smallest and largest of the values inside the parentheses, respectively.
For example:
list_a = min(18, 12, 14, 16)
list_b = max(17, 19, 15, 13)
print(list_a) # will output 12
print(list_b) # will output 19
Now for some more modules:
You can use math.floor() and math.ceil() to round numbers down or up to the nearest int.
For example:
# first import
import math
num_a = math.floor(2.3)
num_b = math.ceil(2.3)
print(num_a) # will output 2
print(num_b) # will output 3
Explanation (from Andrew Sutherland's course): math.floor() rounds 2.3 down to the nearest int, which in this case is 2. This is because, if you imagine it, the floor is at the bottom, so that's why it rounds the number down.
Vice-versa for math.ceil(): it rounds 2.3 up to the nearest int, which in this case is 3. This is because ceil is short for ceiling (programmers like to shorten words), and the ceiling is high.
You can also get pi (π):
import math
pi = math.pi
print(pi)
Output:
3.141592653589793
Here is the full list of all the things you can do with math: https://www.w3schools.com/python/module_math.asp
Small Programs You Can Use
Countdown Program:
# imports
import time
from time import sleep
def countdown(): # making a function for the countdown (so you can use it several times)
    count = int(input("Countdown from what? ")) # asks the user what number to count down from
    while count >= 0: # will repeat while count is 0 or more
        print(count) # prints where the countdown is at
        count -= 1 # subtracts 1 from count
        sleep(1) # program waits 1 second before continuing
    print("End of countdown!") # message after the countdown

countdown() # remember to call the function or nothing will happen
Output:
Countdown from what? 5
5
4
3
2
1
0
End of countdown!
Simple Calculator
First way using eval()
calculation = input("Type your calculation: ") # asks the user for a calculation.
print("Answer to " + str(calculation) + ": " + str(eval(calculation)))
# eval basically does the operation, like on a normal calculator.
# however, if you write something other than a valid operation, there will be an error.
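Since eval() will crash on anything that is not a valid expression, one way to guard it (a small sketch using try/except, which is standard Python but not covered above) is:
calculation = input("Type your calculation: ")
try:
    print("Answer to " + calculation + ": " + str(eval(calculation)))
except Exception:
    print("That doesn't look like a valid calculation!")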
Or another way, using several conditionals, and you can only do "something" + "something" (but with the operators):
def calculator(): # making a function to hold all the code for calculator
    while True: # loops forever so you can make several calculations without having to press run again
        first_num = int(input("Enter 1st number: ")) # asks user for 1st number
        second_num = int(input("Enter 2nd number: ")) # asks user for 2nd number
        operator = input("Select operator: + - * / ** // ") # asks user for operator
        if operator == "+": # addition
            answer = first_num + second_num
            print(answer)
        elif operator == "-": # subtraction
            answer = first_num - second_num
            print(answer)
        elif operator == "*": # multiplication
            answer = first_num * second_num
            print(answer)
        elif operator == "/": # division
            answer = first_num / second_num
            print(answer)
        elif operator == "**": # exponentiation ("to the power of")
            answer = first_num ** second_num
            print(answer)
        elif operator == "//": # floor division
            answer = first_num // second_num
            print(answer)
        else: # if user selects an invalid operator
            print("Invalid!")

calculator() # calls the function
But obviously that is pretty long and full of many if/elif.
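One possible way to shorten it (just a sketch, using Python's built-in operator module instead of the if/elif chain above) is to look the operator up in a dictionary:
import operator

ops = {"+": operator.add, "-": operator.sub, "*": operator.mul,
       "/": operator.truediv, "**": operator.pow, "//": operator.floordiv}

def calculator():
    while True:
        first_num = int(input("Enter 1st number: "))
        second_num = int(input("Enter 2nd number: "))
        op = input("Select operator: + - * / ** // ")
        if op in ops: # the dictionary lookup replaces all the if/elif branches
            print(ops[op](first_num, second_num))
        else:
            print("Invalid!")

calculator()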
Some functions that are useful:
"Press ENTER to continue" Prompt:
def enter():
    input("Press ENTER to continue! ")
# this is useful for text-based adventure games; when the player finishes reading some text, they can press ENTER and the next part will follow.
# just call the function where you need it
Spacing in between lines function:
def space():
    print()
    print()
# same as pressing ENTER twice, this is useful to make your text a bit more airy, makes it less compact and block like.
Slowprint:
# first imports:
import time, sys
from time import sleep
def sp(text): # parameter renamed so it doesn't shadow Python's built-in str
    for letter in text:
        sys.stdout.write(letter)
        sys.stdout.flush()
        time.sleep(0.06)
    print()

# to use it:
sp("Hello there!")
# this will output Hello there! one letter every 0.06 seconds, making it look like a typewriter effect.
ANSI Escape Codes
Escape sequences and ANSI escape codes let you control how text appears in the console. You can use them to make the output nicer for the user.
For example, you can use \n for a new line:
name = input("Enter your name\n>>> ")
Output:
Enter your name
>>>
This makes it look nice, you can start typing on the little prompt arrows >>>.
You can also use \t for tab:
print("Hello\tdude")
Output:
Hello dude
\v for vertical tab:
print("Hello\vdude")
Output:
Hello dude
You can also have colors in python:
# the ANSI codes are stored in variables, making them easier to use
black = "\033[0;30m"
red = "\033[0;31m"
green = "\033[0;32m"
yellow = "\033[0;33m"
blue = "\033[0;34m"
magenta = "\033[0;35m"
cyan = "\033[0;36m"
white = "\033[0;37m"
bright_black = "\033[0;90m"
bright_red = "\033[0;91m"
bright_green = "\033[0;92m"
bright_yellow = "\033[0;93m"
bright_blue = "\033[0;94m"
bright_magenta = "\033[0;95m"
bright_cyan = "\033[0;96m"
bright_white = "\033[0;97m"
# to use them:
print(red+"Hello")
# you can also have multiple colors:
print(red+"Hel"+bright_blue+"lo")
# and you can even use it with the slowPrint I mentioned earlier!
Output:
And you can have underline and italic:
reset = "\u001b[0m"
underline = "\033[4m"
italic = "\033[3m"
# to use it:
print(italic+"Hello "+reset+" there "+underline+"Mister!")
# the reset is for taking away all changes you've made to the text
# it makes the text back to the default color and text decorations.
Output:
Links: Sources and Good Websites
Sources:
Always good to use a bit of help from here and there!
W3 Schools: https://www.w3schools.com/python/default.asp
Wikipedia: https://en.wikipedia.org/wiki/Guido_van_Rossum
Wikipedia: https://en.wikipedia.org/wiki/ANSI_escape_code
https://www.python-course.eu/python3_functions.php#:~:text=A%20return%20statement%20ends%20the,special%20value%20None%20is%20returned.
Good Websites you can use:
Official website: https://www.python.org/
W3 Schools: https://www.w3schools.com/python/default.asp
https://www.tutorialspoint.com/python/index.htm
https://realpython.com/
Interactive:
Goodbye World!: End
Well, I guess this is the end. I hope y'all have learnt something new/interesting! If you have any questions, please comment and I will try my best to answer them.
Real applications have real data, and real data nests: objects inside lists of objects, objects inside objects.
When traversing nested data this way, you tend to run into two problems:
Poor code readability
Hard to debug when an error occurs
glom is a powerful new approach to working with real-world data, featuring:
Path-based access to nested data structures
Readable, meaningful error messages
Declarative data transformation using lightweight, Pythonic specifications
Built-in data exploration and debugging features
Example
Without glom
>>> data = {'a': {'b': {'c': 'd'}}}
>>> data['a']['b']['c']
'd'
>>> data2 = {'a': {'b': None}}
>>> data2['a']['b']['c']
Traceback (most recent call last):
...
TypeError: 'NoneType' object is not subscriptable
With glom, the code becomes more readable and concise, and the error messages are much easier to debug:
>>> from glom import glom
>>> glom(data, 'a.b.c')
'd'
>>> glom(data2, 'a.b.c')
Traceback (most recent call last):
...
PathAccessError: could not access 'c', index 2 in path Path('a', 'b', 'c'), got error: ...
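Specs are not limited to dotted paths. As a small illustration (assuming from glom import glom as above; the dict spec and the default keyword are part of glom's documented behaviour), you can reshape data declaratively and fall back to a default instead of raising:
>>> from glom import glom
>>> data = {'a': {'b': {'c': 'd'}}}
>>> glom(data, {'value': 'a.b.c'})
{'value': 'd'}
>>> glom({'a': {'b': None}}, 'a.b.c', default='missing')
'missing'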
Is it intended behavior that hMailServer will not add the SpamAssassin score to its own score unless the SpamAssassin score is greater than or equal to the SpamAssassin spam threshold (i.e., SpamAssassin tags it as spam)? I've seen some discussions on this and it is eventually just dropped and the people usually just lower their SpamAssassin threshold score to make hMailServer always add the scores together (thus making SpamAssassin tag virtually everything as spam). It seems more logical to always add the scores together (or at least give us the option). Is this by design or is it a bug? Are there people who actually want it to work that way it currently is?
Thanks,
Chad
If SpamAssassin tags mail as spam, does that change the message?
(I don't use SpamAssassin)
I'm also guessing that the spam score may NOT be relayed to hMailserver by SpamAssassin unless the message is marked as SPAM
Just 'cause I link to a page and say little else doesn't mean I am not being nice.
https://www.hmailserver.com/documentation
When SpamAssassin scores it above the configured threshold, it will add an X-Spam-Status: YES header (along with other informative headers). I downloaded the hMailServer source code and verified that it does not store the SpamAssassin score (and thus pass it back up to main spam handling routine) unless it finds the X-Spam-Status: YES header. Unless most people really like this behavior, I propose that it always count the score regardless of the X-Spam-Status value. In fact, I feel like that is the whole point of scoring....you keep testing and keep adding up scores until your ultimate threshold is reached. In my particular case, SpamAssassin gives a score of 4.9 (where 5.0 is the SpamAssassin threshold) and then hMailServer failed the SPF test which I score as 5. The total score should have been 9.9, but hMailServer just scored it as 5. My delete threshold is 9, so the mail should have been deleted but it wasn't.
So if you had set your SpamAssassin score at 1, the actual score value would have been passed, and the message rejected.
From what you are saying, the down side to setting the SpamAssassin mark score to 1 is that some mail that doesn't reach the hmailsevrer spam score will contain a spamAssassin header showing a SPamAssassin score?
Is that such a big deal?
Just 'cause I link to a page and say little else doesn't mean I am not being nice.
https://www.hmailserver.com/documentation
In my opinion I can't see why you would want to add the SA score unless SA has deemed it potential spam.
By using SA you are trusting it and its rules to make a judgement: as per your SA configuration, a mail is either deemed as spam or it is deemed as safe.
You know that SA rules add scores in decimals and as negatives, such as
0.7
-0.2
0.5
1.3
-1.3
and as such it collectively determines its spam value. Furthermore it does this because the rules and its abilities are FAR MORE advanced than any HMS does internally.
Using the SA score is only appropriate when you think SA may have determined it as spam (I have the threshold set at 3, which seems to be just right) and yet you want to not FULLY trust it and add HMS tests on to it too. i.e., maybe SA scores 3.1 (determining it as spam), and you have your own HMS threshold set to something higher (5 or 6) allowing for your own HMS tests. Of course your own HMS test scoring is probably not as fine-tuned as the SA scoring rules, so it is more brute force.
The alternative you are proposing is to say that even though SA thinks a mail is not spam (because it only scored 1.0), you are going to use this '1' and add it to your own HMS tests; well, what happens if the SA 'HELO' test gives the score of 1, and then you run the HMS 'HELO' test scoring 4 (the same test yet scored twice)? You now deem your mail as spam (achieving 5) when in reality neither SA nor HMS has REALLY found it as spam at all. Whereas using the existing method the mail, even though it is being tested twice for the same condition, still isn't deemed as spam (SA scores it 1.0, HMS scores it 4 - and yet your HMS score threshold is 5).
My point is that it is right in my mind to perform the way it does (only when SA considers it as spam) when you have already decided to trust SA's decision making and judgement.
5.7 on test.
AV:
SpamassassinForWindows 3.4.0spamd service
AV:
Clamwin+Clamdservice +sanesecurity defs: https://www.hmailserver.com/forum/viewtopic.php?f=21&t=26829
jimimaseye wrote: Furthermore it does this because the rules and its abilities are FAR MORE advanced than any HMS does internally.
Agreed.
Thinking about this, I'd expect that the SpamAssassin score was added irrespective of whether SpamAssassin marked the message as SPAM or not. (How else could the negative values be useful?) That's certainly how the GUI looks.
I'd think that NOT doing that is a bug, and that this should be added to the issue tracker at https://github.com/hmailserver/hmailserver/issues
Just 'cause I link to a page and say little else doesn't mean I am not being nice.
https://www.hmailserver.com/documentation
To give you an idea Matt, a typical Spamassassin header is added regardless and looks like this:
Code: Select all
X-Spam-Status: No, score=0.3 required=3.0 tests=BAYES_00,
DYN_RDNS_AND_INLINE_IMAGE,HTML_MESSAGE,RDNS_DYNAMIC,SPF_PASS,
T_KAM_HTML_FONT_INVALID autolearn=no autolearn_force=no version=3.4.0
X-Spam-Report:
* -0.0 SPF_PASS SPF: sender matches SPF record
* 0.0 T_KAM_HTML_FONT_INVALID BODY: Test for Invalidly Named or Formatted
* Colors in HTML
* -1.9 BAYES_00 BODY: Bayes spam probability is 0 to 1%
* [score: 0.0000]
* 0.0 HTML_MESSAGE BODY: HTML included in message
* 1.0 RDNS_DYNAMIC Delivered to internal network by host with
* dynamic-looking rDNS
* 1.2 DYN_RDNS_AND_INLINE_IMAGE Contains image, and was sent by dynamic
* rDNS
*
where everything from "tests=" onwards is the names of all the rules that matched and were scored. The spam 'report' then lists the tests individually with their scores.
Now, this is a good example: given this particular report scored overall 0.3, how would you have HMS take that score (as it only deals with integer scores)?
And here is another example:
Code: Select all
X-Spam-Status: No, score=-4.4 required=3.0 tests=BAYES_00,DKIM_SIGNED,
DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FROM,HTML_MESSAGE,KHOP_RCVD_TRUST,
RCVD_IN_DNSWL_LOW,RCVD_IN_HOSTKARMA_YE,RCVD_IN_MSPIKE_H3,RCVD_IN_MSPIKE_WL,
SPF_PASS,T_KAM_HTML_FONT_INVALID autolearn=ham autolearn_force=no
version=3.4.0
X-Spam-Report:
* 0.0 RCVD_IN_HOSTKARMA_YE RBL: HostKarma: relay in yellow list (varies)
* [209.85.212.177 listed in hostkarma.junkemailfilter.com]
* 0.0 FREEMAIL_FROM Sender email is commonly abused enduser mail provider
* (sandimy[at]gmail.com)
* -0.7 RCVD_IN_DNSWL_LOW RBL: Sender listed at http://www.dnswl.org/, low
* trust
* [209.85.212.177 listed in list.dnswl.org]
* -0.0 RCVD_IN_MSPIKE_H3 RBL: Good reputation (+3)
* [209.85.212.177 listed in wl.mailspike.net]
* -0.0 SPF_PASS SPF: sender matches SPF record
* 0.0 T_KAM_HTML_FONT_INVALID BODY: Test for Invalidly Named or Formatted
* Colors in HTML
* -1.9 BAYES_00 BODY: Bayes spam probability is 0 to 1%
* [score: 0.0000]
* 0.0 HTML_MESSAGE BODY: HTML included in message
* -0.1 DKIM_VALID_AU Message has a valid DKIM or DK signature from author's
* domain
* -0.1 DKIM_VALID Message has at least one valid DKIM or DK signature
* 0.1 DKIM_SIGNED Message has a DKIM or DK signature, not necessarily
* valid
* -0.0 RCVD_IN_MSPIKE_WL Mailspike good senders
* -1.8 KHOP_RCVD_TRUST DNS-Whitelisted sender is verified
*
The result of this is MINUS 4.4 (-4.4). Now if you were to apply your own HMS rules in DNSBL or SURBL (that SA doesn't cover) and even score a match at 5 and 4 (total 9), it would still not hit a HMS threshold of 5 (which you may have set) - despite HMS actually scoring way above this.
This explains why I believe you should only use SA scores when SA has determined it as spam by hitting ITS threshold.
5.7 on test.
AV:
SpamassassinForWindows 3.4.0spamd service
AV:
Clamwin+Clamdservice +sanesecurity defs: https://www.hmailserver.com/forum/viewtopic.php?f=21&t=26829
jimimaseye wrote: ... or SURBL (that SA doesnt cover) ...
Mine does... Hint: "URIBL"
Code: Select all
X-Spam-Status: Yes, score=44.5 required=3.0 tests=BAYES_99,BAYES_999,
BODY_URI_ONLY,KAM_RBL,KAM_VERY_BLACK_DBL,MSGID_FROM_MTA_HEADER,
RAZOR2_CF_RANGE_51_100,RAZOR2_CF_RANGE_E8_51_100,RAZOR2_CHECK,
RCVD_IN_BL_SPAMCOP_NET,RCVD_IN_BRBL_LASTEXT,RCVD_IN_MSPIKE_BL,
RCVD_IN_MSPIKE_L5,RCVD_IN_PBL,RCVD_IN_PSBL,RCVD_IN_RP_RNBL,RCVD_IN_SORBS_WEB,
RCVD_IN_XBL,RCVD_NUMERIC_HELO,TVD_RCVD_IP,TVD_RCVD_IP4,T_FSL_HELO_BARE_IP_2,
URIBL_AB_SURBL,URIBL_BLACK,URIBL_DBL_SPAM,URIBL_JP_SURBL,URIBL_SBL,
URIBL_SBL_A,URIBL_SC_SURBL,URIBL_WS_SURBL autolearn=disabled version=3.4.0
X-Spam-Report:
* 0.6 URIBL_SC_SURBL Contains an URL listed in the SC SURBL blocklist
* [URIs: hotdrugsstore.in]
* 1.3 URIBL_JP_SURBL Contains an URL listed in the JP SURBL blocklist
* [URIs: hotdrugsstore.in]
* 4.5 URIBL_AB_SURBL Contains an URL listed in the AB SURBL blocklist
* [URIs: hotdrugsstore.in]
* 1.6 URIBL_WS_SURBL Contains an URL listed in the WS SURBL blocklist
* [URIs: hotdrugsstore.in]
* 3.3 RCVD_IN_PBL RBL: Received via a relay in Spamhaus PBL
* [109.135.11.38 listed in zen.spamhaus.org]
* 0.4 RCVD_IN_XBL RBL: Received via a relay in Spamhaus XBL
* 3.5 BAYES_99 BODY: Bayes spam probability is 99 to 100%
* [score: 1.0000]
* 0.0 TVD_RCVD_IP Message was received from an IP address
* 0.0 TVD_RCVD_IP4 Message was received from an IPv4 address
* 1.2 RCVD_NUMERIC_HELO Received: contains an IP address used for HELO
* 0.2 BAYES_999 BODY: Bayes spam probability is 99.9 to 100%
* [score: 1.0000]
* 1.9 RAZOR2_CF_RANGE_E8_51_100 Razor2 gives engine 8 confidence level
* above 50%
* [cf: 100] * 0.9 RAZOR2_CHECK Listed in Razor2 (http://razor.sf.net/)
* 0.5 RAZOR2_CF_RANGE_51_100 Razor2 gives confidence level above 50%
* [cf: 100]
* 2.5 URIBL_DBL_SPAM Contains a spam URL listed in the DBL blocklist
* [URIs: hotdrugsstore.in]
* 1.7 URIBL_BLACK Contains an URL listed in the URIBL blacklist
* [URIs: hotdrugsstore.in]
* 3.2 RCVD_IN_MSPIKE_L5 RBL: Very bad reputation (-5)
* [109.135.11.38 listed in bl.mailspike.net]
* 1.4 RCVD_IN_BRBL_LASTEXT RBL: No description available.
* [109.135.11.38 listed in bb.barracudacentral.org]
* 2.7 RCVD_IN_PSBL RBL: Received via a relay in PSBL
* [109.135.11.38 listed in psbl.surriel.com]
* 0.8 RCVD_IN_SORBS_WEB RBL: SORBS: sender is an abusable web server
* [109.135.11.38 listed in dnsbl.sorbs.net]
* 1.3 RCVD_IN_RP_RNBL RBL: Relay in RNBL,
* https://senderscore.org/blacklistlookup/
* [109.135.11.38 listed in bl.score.senderscore.com]
* 1.3 RCVD_IN_BL_SPAMCOP_NET RBL: Received via a relay in bl.spamcop.net
* [Blocked - see <http://www.spamcop.net/bl.shtml?109.135.11.38>]
* 0.1 URIBL_SBL_A Contains URL's A record listed in the SBL blocklist
* [URIs: meekly.hotdrugsstore.in]
* 1.6 URIBL_SBL Contains an URL's NS IP listed in the SBL blocklist
* [URIs: meekly.hotdrugsstore.in]
* 2.0 KAM_RBL Higher scores for hitting multiple trusted RBLs
* 0.0 RCVD_IN_MSPIKE_BL Mailspike blacklisted
* 5.0 KAM_VERY_BLACK_DBL Email that hits both URIBL Black and Spamhaus DBL
* 0.0 MSGID_FROM_MTA_HEADER Message-Id was added by a relay
* 0.0 T_FSL_HELO_BARE_IP_2 No description available.
* 1.0 BODY_URI_ONLY Message body is only a URI in one line of text or for
* an image
SørenR.
Algorithm(noun.)
Word used by programmers when they do not want to explain what they did.
SorenR wrote: Mine does... Hint: "URIBL"
I meant you could be adding your OWN lookups into HMS (and setting your own scores against positive matches for them) that Spamassassin has not been coded/rule defined to cover. (I know SA covers most of the main/popular ones such as multi.surbl.org etc., but there might be one the user has found that is not covered by SA rules that he chooses to add).
5.7 on test.
AV:
SpamassassinForWindows 3.4.0spamd service
AV:
Clamwin+Clamdservice +sanesecurity defs: https://www.hmailserver.com/forum/viewtopic.php?f=21&t=26829
jimimaseye wrote: I meant you could be adding your OWN lookups into HMS (and setting your own scores against positive matches for them) that Spamassassin has not been coded/rule defined to cover.
Ah.. My bad.
SørenR.
Algorithm(noun.)
Word used by programmers when they do not want to explain what they did.
SpamAssassin can be configured to use other URIBL and DNSBL that aren't provided out of the box. I do, in fact, do this. In my configuration, I was assuming that spam scores were all additive, so I disable the DNS and SURBL options in hMailServer so that they would not contribute to double scoring. I have also spent time slowly studying the types of spam we receive and carefully tuning SpamAssassin to my exact needs. I'm somewhat new to hMailServer...coming from using other commercial products (SecurityGateway, SpamTitan, ORF, etc). All of the other commercial products have a continuously running spam score, and I have found this to be very logical and effective. I let hMailServer continue to do SPF, HELO command, and sender DNS-MX checks because it is better suited to do so. I feel like those checks combined with the SpamAssassin checks provide very accurate spam checks. In fact, the majority of spams that get through to my system are the ones where SpamAssassin scores below 5 and hMailServer ignores the score (but the mail would have been caught if it didn't, because it failed an hMailServer test).
I also feel like hMailServer should change to all floating point scoring like some of the other commercial solutions so that there wouldn't be any integer truncation when adding everything together.
jimimaseye, your example that scored negative is a bit biased. Part of the negative contribution was because the mail is in some trusted whitelist databases. In my experience, you don't find the same IP's and domains both in blacklists AND whitelists....so, while not impossible, it is statistically unlikely the any DNSBL or SURBL hits would have fired for your example.
The functionality seems logical to me (as I tried to explain above). The SpamAssassin score can be 'used', or simply the spam=yes/no taken and your own score applied.
I suspect it was intended that you use EITHER SpamAssassin scores ONLY, thereby leaving all decision making up to SA and not adding your own rules, OR you use SA's conclusion ("spam=yes"), score it yourself, and keep your own HMS testing (HELO, SPF, DNSBL etc). It seems that using the SA score and then adding the HMS test score to it isn't what it was intended for. After all, why would you ask SA to test for something, then do exactly the same test again in HMS (effectively doubling up the scoring probability), which is a scenario that is VERY possible (we have already identified that Spamassassin does the SPF, HELO, mainstream DNSBL and SURBL tests anyway - so why ask HMS to double check and double the points if you are choosing to accept SA scoring?)
Ideally, you would simply set 'USE SPAMASSASSIN SCORING=YES', set your threshold as such (as you do in SA) and leave all HMS scoring and testing turned off (with the exception of a specific SURBL/DNSBL test that you KNOW your SA rules are not covering).
5.7 on test.
AV:
SpamassassinForWindows 3.4.0spamd service
AV:
Clamwin+Clamdservice +sanesecurity defs: https://www.hmailserver.com/forum/viewtopic.php?f=21&t=26829
Spam scoring is a regional thing, I don't see the same SPAM as everyone else thus my rules should ideally be different and scored differently.
If you are a SpamAssassin expert (or sufficiently nerdy) you can create your own rules to grade scoring to match your environment - most choose not to and rely on pre-built rules that are updated based on a world average of SPAM, simply due to lack of time to maintain those rules. SPAM is evolving all the time and so should the rules that catch it.
Most spammers break rules to deliver SPAM in the most efficient way possible - because someone is paying them - and as we all know; Time is Money.
- GreyListing is a powerful tool - This is where "Time is Money" becomes important.
- SPF WAS a powerful tool; its use is increasing amongst spammers.
- DKIM make everything just a bit more reliable.
- HELO is unfortunately not a reliable way to identify spammers as more home users are "in-sourcing" their mail systems.
- MX... Well, this is where WE can break the rules. It is not a requirement to have MX records according to the RFC's. But, we believe any respectable IT department would have them, if for nothing else than to fight SPAM with a blackhole.MX setup.
- RBL's and SURBL's is a matter of choice. Find one (or more) you trust and "get an opinion from a trusted source".
SpamAssassin will do nearly all of the above, maybe not with the scoring we'd like to use, but then we can add them to hMailServer ourselves. It's like fine-tuning SpamAssassin, outside of SpamAssassin.
Add it all together, and we'll get a pretty good picture of what is SPAM, and what is HAM.
For my part I rate everything as 3, SpamAssassin triggers at 3.0. Anything 3 or above is marked as SPAM, moved into the users SPAM folder, and forwarded to a dedicated SPAM user for further analysis - if needed. Only SPAM scored above 100 is deleted/rejected.
False-positives are added to a hMail rule-based whitelist to prevent them being treated as SPAM - however they will still be marked as SPAM.
Each users SPAM folder and INBOX folder is processed every night to maintain the Bayesian database used by SpamAssassin. The intended rationale is to "localize" SpamAssassin to my neck of the woods. Also the users are able to partly influence classification by moving emails between the two folders for processing the next night (actually, they have a 30 day window).
Despite all efforts I do get SPAM that only SpamAssassin catches... Spammers are getting increasingly clever.
As I posted earlier...
I did a quick search on my server for the highest SPAM score in the last 6 months... 66.2 and it passed ALL of the other SPAM tests in hMailServer (Greylist, SPF, DKIM, HELO, MX, 4 RBL's and SURBL), except SpamAssassin ...
SørenR.
Algorithm(noun.)
Word used by programmers when they do not want to explain what they did.
Just for info, my SA has a mark threshold of 3 and that's when mail gets marked as [SPAM].
Anything that reaches HMS over a score of 7 is automatically deleted - remaining unseen and unknown (technically not true - it gets moved to Trash folder straight away by a rule, so viewable IF you want to go there and see it).
I would say that 98% (if not more) of my mail comes in clean and unmarked or deleted as definite spam correctly. The other 2% gets marked as spam (by SA) but remains genuine (usually because the mail comes in sent with full capitals subjects and/or body content - spamassassin dont like that.)
5.7 on test.
AV:
SpamassassinForWindows 3.4.0spamd service
AV:
Clamwin+Clamdservice +sanesecurity defs: https://www.hmailserver.com/forum/viewtopic.php?f=21&t=26829
Spam fighting is an art and a never ending battle. I feel like all available spam technologies should be employed. I like the design of being able to deploy anti-spam technologies in a score fashion and add everything together to make the final decision.
Not too many people have chimed in one way or the other. If I am the only one who really feels strongly that all methods should add together to one final score, then I'll just modify the source to behave like I want. After inspecting the source, it seems this change would be rather easy to make. I was really hoping the developer and community would feel as strongly as I do...as I hate maintaining a forked project.
It looks as though I could also write a script to grab the SpamAssassin score and the HMS score and add them together myself in the situations where HMS doesn't add them itself.
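For what it's worth, the score-combining part of such a script is trivial. Here is a rough Python sketch (not hMailServer's actual API or event-handler code; it just parses the X-Spam-Status header shown earlier in this thread and adds the two numbers):
import re

def combined_score(headers, hms_score):
    """Add the SpamAssassin score from X-Spam-Status to an existing HMS score."""
    status = headers.get("X-Spam-Status", "")
    match = re.search(r"score=(-?\d+(?:\.\d+)?)", status)
    sa_score = float(match.group(1)) if match else 0.0
    return hms_score + sa_score

# Example: SA scored 4.9 (below its own threshold) and HMS scored 5 for a failed SPF test.
print(combined_score({"X-Spam-Status": "No, score=4.9 required=5.0"}, 5))  # 9.9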
mattg wrote: I'd think that NOT doing that is a bug, and that this should be added to the issue tracker at https://github.com/hmailserver/hmailserver/issues
Martin doesn't spend a lot of time any more on the forum.
Just 'cause I link to a page and say little else doesn't mean I am not being nice.
https://www.hmailserver.com/documentation
superman20 wrote: Spam fighting is an art and a never ending battle. I feel like all available spam technologies should be employed. I like the design of being able to deploy anti-spam technologies in a score fashion and add everything together to make the final decision.
If you do change the source code, you could try submitting it to Martin for review. You may get lucky and have it included in the release.
Scripting is quite easy. I have my Backup-MX hosted with my ISP and they use a round-robin approach to DNS, so the HELO check fails on 2 of 3 rDNS lookups and the DKIM check fails for obvious reasons; in those cases I rewrite/recalculate the "X-hMailServer-Spam" and "X-hMailServer-Reason-Score" headers.
SørenR.
Algorithm(noun.)
Word used by programmers when they do not want to explain what they did.
(Sorry I missed this and didn't answer earlier.)
superman20 wrote: jimimaseye, your example that scored negative is a bit biased. Part of the negative contribution was because the mail is in some trusted whitelist databases.
In my example, I showed a scenario where a message ended up with a MINUS score. Now, let's say it was sent from China (it wasn't, but it could have been). Still genuine, still allowed, not technically spam (hence its score). BUT... I have a DNSBL rule (zz.countries.nerd.dk) that scores anything coming from China a value of 8, which would be enough to reject this email by hitting my 'delete' threshold of 8 (because I don't want anything from China). And yet, in this example it CLEARLY would have been allowed in, because -4.4 + 8 is only 3.6 = FAILED.
There is nothing unlikely about this scenario for people that are using such geo-blocking (as I am).
5.7 on test.
AV:
SpamassassinForWindows 3.4.0spamd service
AV:
Clamwin+Clamdservice +sanesecurity defs: https://www.hmailserver.com/forum/viewtopic.php?f=21&t=26829
jimimaseye, I certainly appreciate the point you're trying to make, but your new example is a bit contradictory. You have negative points because your e-mail hits some whitelists and some positive points because the e-mail hits some blacklists. I don't think any spam configuration would properly deal with that sort of conflicting information. I actually implement your example somewhat but deal with it differently. My settings have e-mail that is geo-located from China to automatically score the reject/delete score...BUT I also make sure that these custom "extreme" rules are run first and when they hit then everything else is short-circuited. This prevents me from allowing a legitimate e-mail from getting any negative points when I want all China e-mail blocked (good or bad).
superman20 wrote: BUT I also make sure that these custom "extreme" rules are run first and when they hit then everything else is short-circuited.
As is my case. And I don't need any special 'coding'/methods to ensure everything else is short-circuited, as this is just how things work currently. As my geoblock DNSBL is in HMS and would hit the threshold, the mail gets rejected immediately and is not passed to SA (delivery refused). However, your earlier suggestion is that everything should be added together, so logically you wouldn't be able to short-circuit, because after HMS performs its internal checks it would HAVE to call SA and get its scoring before it can conclude and act on the final score.
You can't have it both ways.
5.7 on test.
AV:
SpamassassinForWindows 3.4.0spamd service
AV:
Clamwin+Clamdservice +sanesecurity defs: https://www.hmailserver.com/forum/viewtopic.php?f=21&t=26829
You can somewhat have it both ways if you have spam systems that works together and not independently. Spam checking can definitely stop as soon as the delete threshold is reached. So if the HMS implemented checks hit the delete threshold, then there is no need to call any other checks. However, you must keep going down the chain calling all checks until the delete threshold is reached. Spam testing will never be absolute which is why I strongly feel that it must always be additive. You are adding probabilities and confidence levels that something is spam. If your spam level is 5 and HMS scores 4 and SpamAssassin scores 4 (and assuming a sensible setup where there are no redundant tests), then each one independently says NOT spam, but I'd be willing to bet that it is spam in almost all of those situations.
The problem is that you are suggesting that two totally separate systems, each with their own intensities and complexities (with SA being WAY more advanced than HMS), are somehow 'collated' and the scores shared, despite one being the little runt of the spam-checking fraternity whilst the other is the guru. If HMS scoring allowed negative scoring it would be a LITTLE (just) more advanced and more like SA's capabilities, but it doesn't (SA realises there are positives, and then has reasons to double check and apply negatives to counteract - something that HMS spam checking doesn't).
I maintain the two are TOTALLY separate systems (one being written and designed by the HMS author and the other being created and written by unrelated entities over whom the HMS author has no control). It was designed to be that way (use one or the other but not both (although it won't stop you)) and was designed that way for a reason (recognition that SA will do the job a LOT better than HMS can ever dream of). Using ACTUAL SA scores and adding them to HMS scoring of internal checks wouldn't make sense, because SA's idea of what scoring values work and what they should be is coded, tested, retested, fine-tuned, modified and implemented after a retest again. HMS scoring is simply (usually) choose a number, an INTEGER only at that, and apply it. For example, the SA SPF fail check might score only 0.2 whereas in HMS its default is 2 or 3. Well, how can that work together? Still, it will let you, but only once it has taken the advice from the guru of spam checking (Spamassassin) to decide whether there is any REAL threat.
That's my view anyway.
5.7 on test.
AV:
SpamassassinForWindows 3.4.0spamd service
AV:
Clamwin+Clamdservice +sanesecurity defs: https://www.hmailserver.com/forum/viewtopic.php?f=21&t=26829
Just using some quiet time to implement SpamAssassin
New Windows 10 Pro machine. Enabled HyperV Server and created a new Ubuntu Server install (running on one core and 512 MB RAM) to run SpamAssassin, ClamAV as per >> viewtopic.php?f=21&t=29053
I've found SA marks far lower than I would like.
Playing with SA rules is deep nerdy stuff. I don't want to go and re-score all tests, and potentially break updates, so I have created a new SA rule that simply adds 2.2 to all SA scores. I have set SA to mark as spam if 3 or higher, without changing the subject.
My existing hMailserver AntiSPAM was working pretty good, with a mark at 5 and delete at 60
I've added ClamAV scores including the Sane-Security databases to SA. Currently Mail gets scanned twice by ClamAV, once to score, and then to categorically detect virus - I am watching to see how that works out.
Also using lots of additional databases and filters that I have found here.
Still looking for a good GeoIP addition for Spamassassin.
Still fine-tuning, but catching more SPAM without catching more HAM
Just 'cause I link to a page and say little else doesn't mean I am not being nice.
https://www.hmailserver.com/documentation
@mattg
A nice source of DNSBL lists ready to use with SpamAssassin is listed here:
http://www.intra2net.com/en/support/antispam/index.php
CIDR to RegEx: d-fault.nl/CIDRtoRegEx
DNS Lookup: d-fault.nl/DNSTools
DNSBL Lookup: d-fault.nl/DNSBLLookup
GEOIP Lookup: d-fault.nl/GeoipLookup
Could someone post a copy of your spamassassin local.cf with your preferred rules to allow spamassassin do all of the dnsbl and uribl tests . I would like to move all the spam test to spamassassin for better implementation of the scoring and remove it out of hmailserver. It is very confusing when the 2 scoring systems either do not add together or counter each other. I say let one system score for spam and maybe hmailserver do the early spf and dns tests unless spamassassin can do those as well. Not being able to fine tune hmailserver except in whole number integers also skews the scoring. 4.9 is truncated to 4. Thank you
My setup is here: viewtopic.php?f=21&t=28133 (personal settings are in the 2nd post). You will see I simply set a 'tagged by SA' as 5 in line with the builtin antispam tests. (You dont have to use SA's scoring system).
You can simply use SA exclusively if you wish by just disabling any built-in antispam tests (DNSBLs etc). I personally (as you will see) use a combination of both. SA does a far better job of antispam testing, so you can be confident that in the main, if HMS would find it then SA has already found it... and then some.
5.7 on test.
AV:
SpamassassinForWindows 3.4.0spamd service
AV:
Clamwin+Clamdservice +sanesecurity defs: https://www.hmailserver.com/forum/viewtopic.php?f=21&t=26829
Thank you for your post. I had seen that setup but I was hoping that there would be a way of just controlling the dnsbl and uribl tests in the local.cf or another file without having to get into all the scripting. Not being a programmer, scripting gets confusing if you do not use it all the time, at least for me. Is there a way to set it in the local.cf? Thank you for the help.
In c$\SpamAssassin\share\3.004000\updates_spamassassin_org you will find two files: 20_dnsbl_tests.cf and 25_uribl.cf.
The clever thing with SpamAssassin is that it reads all config files alphabetically ... So if you copy these two files to c$\SpamAssassin\etc\spamassassin and name them my_dnsbl_tests.cf and my_uribl.cf, you can modify them all you want or change the scores, as they are read AFTER the originals.
Anyways, take a look at the files and you'll get the idea how to build your own lists.
***** Example *****
I have a config (KAM.cf) about 288 kb... There is this one rule ...
Code: Select all
#Bad UTF-8 content type and transfer encoding - Thanks to Pedro David Marco for alerting to issue
header __KAM_BAD_UTF8_1 Content-Type =~ /text\/html; charset=\"utf-8\"/i
header __KAM_BAD_UTF8_2 Content-Transfer-Encoding =~ /base64/i
full __RW_BAD_UTF8_3 /^(?:[^\n]|\n(?!\n))*\nContent-Transfer-Encoding:\s+base64(?:[^\n]|\n(?!\n))*\n\n[\s\n]{0,300}[^\s\n].{0,300}[^a-z0-9+\/=\n][^\s\n]/si
meta KAM_BAD_UTF8 (__KAM_BAD_UTF8_1 + __KAM_BAD_UTF8_2 + __RW_BAD_UTF8_3 >= 3)
score KAM_BAD_UTF8 14.0
describe KAM_BAD_UTF8 Bad Content Type and Transfer Encoding that attempts to evade SA scanning
that checks the entire message incl. attachments. If someone sends an email to me with a PDF file in it, it usually takes 300+ seconds and then hMail fails.
I have created an extra config (KAM-fix.cf) containing only this rule
Code: Select all
#Bad UTF-8 content type and transfer encoding - Thanks to Pedro David Marco for alerting to issue
header __KAM_BAD_UTF8_1 Content-Type =~ /text\/html; charset=\"utf-8\"/i
header __KAM_BAD_UTF8_2 Content-Transfer-Encoding =~ /base64/i
body __RW_BAD_UTF8_3 /^(?:[^\n]|\n(?!\n))*\nContent-Transfer-Encoding:\s+base64(?:[^\n]|\n(?!\n))*\n\n[\s\n]{0,300}[^\s\n].{0,300}[^a-z0-9+\/=\n][^\s\n]/si
meta KAM_BAD_UTF8 (__KAM_BAD_UTF8_1 + __KAM_BAD_UTF8_2 + __RW_BAD_UTF8_3 >= 3)
score KAM_BAD_UTF8 14.0
describe KAM_BAD_UTF8 Bad Content Type and Transfer Encoding that attempts to evade SA scanning
where "full" is replaced with "body" in line 4. Since KAM.cf is read first and then KAM-fix.cf, it changes the rule. Now everything passes in less than 10 seconds. - And I don't have to create a script to alter the file every time it is auto-updated.
SørenR.
Algorithm(noun.)
Word used by programmers when they do not want to explain what they did.
In your experience with whitelisting and blacklisting is there an easy manageable way to add a whitelist/blacklists in spamassassin instead of hmailserver? I like hmailserver fine but not easy to manage the whitelist and blocking rules when they grow like mine have since trying to get a handle on all the different ways to stop spam but not stop ham. I end up adding the same line or rule again and again. I am sure in your experiences you have said I think there is an easier way to to implement this or manage this. Thanks in advance.
kroberts wrote: In your experience with whitelisting and blacklisting is there an easy manageable way to add whitelists/blacklists in SpamAssassin instead of hMailServer?
My SpamAssassin is reasonably well trained after 3 years so I have only a few addresses whitelisted in SpamAssassin.
I don't have a blacklist per se... I block emails on multiple levels of identification; body, from, helo and subject, all done in eventhandlers; OnClientConnect(oClient), OnHELO(oClient) and OnAcceptMessage(oClient, oMessage).
80% of what I block is rejected, the rest is marked as SPAM and my daily SpamAssassin training eventually learn the blacklisted emails so I can clean some of the manual blacklist after about 1 month or so.
I check my custom logs every day and adjust filters if needed. Last time was IIRC 2 weeks ago - and I also built a new IDS function to catch brute force IMAPS logon attempts a few days ago.
SørenR.
Algorithm(noun.)
Word used by programmers when they do not want to explain what they did.
Here is a snippet of my custom.cf
(I changed the name so that it wasn't overwritten on SpamAssassin upgrade)
Code: Select all
# Some shortcircuiting, if the plugin is enabled
#
ifplugin Mail::SpamAssassin::Plugin::Shortcircuit
#
# default: strongly-whitelisted mails are *really* whitelisted now, if the
# shortcircuiting plugin is active, causing early exit to save CPU load.
# Uncomment to turn this on
#
shortcircuit USER_IN_WHITELIST on
# shortcircuit USER_IN_DEF_WHITELIST on
shortcircuit USER_IN_ALL_SPAM_TO on
shortcircuit SUBJECT_IN_WHITELIST on
# the opposite; blacklisted mails can also save CPU
#
shortcircuit USER_IN_BLACKLIST on
# shortcircuit USER_IN_BLACKLIST_TO on
# shortcircuit SUBJECT_IN_BLACKLIST on
# if you have taken the time to correctly specify your "trusted_networks",
# this is another good way to save CPU
#
endif # Mail::SpamAssassin::Plugin::Shortcircuit
# don't score URIBL
score URIBL_BLACK 0
score URIBL_RED 0
score URIBL_GREY 0
score URIBL_BLOCKED 0
# DNSBL scores
score URIBL_DBL_SPAM 4
score RCVD_IN_SBL 3
# blacklist from
blacklist_from *.top
blacklist_from *.eu
blacklist_from *.download
blacklist_from *.accountant
blacklist_from *.cf
blacklist_from *.party
blacklist_from *.review
blacklist_from *.faith
blacklist_from *.win
blacklist_from *.trade
blacklist_from *.webcam
blacklist_from *.racing
blacklist_from *.date
blacklist_from *.bid
blacklist_from *.cricket
# whitelist from
whitelist_from *@important_domain.com.au
## BELOW is my ClamAV Integration
## NOT needed for what you are doing
loadplugin ClamAV clamav.pm
full CLAMAV eval:check_clamav()
describe CLAMAV Clam AntiVirus detected something...
score CLAMAV 0.001
# Look for specific types of ClamAV detections
header __CLAMAV_PHISH X-Spam-Virus =~ /Yes.{1,30}Phishing/i
header __CLAMAV_PHISH_HEUR X-Spam-Virus =~ /Yes.{1,30}Phishing\.Heuristics\.Email/
header __CLAMAV_SANE X-Spam-Virus =~ /Yes.{1,30}Sanesecurity/i
header __CLAMAV_MBL X-Spam-Virus =~ /Yes.{1,30}MBL/
header __CLAMAV_MSRBL X-Spam-Virus =~ /Yes.{1,30}MSRBL/
header __CLAMAV_VX X-Spam-Virus =~ /Yes.{1,30}VX\./
# Give the above rules a very late priority so that they can see the output
# of previous rules - otherwise they don't work! Not sure what the correct
# priority should be but this seems to work...
priority __CLAMAV_PHISH 9999
priority __CLAMAV_PHISH_HEUR 9999
priority __CLAMAV_SANE 9999
priority __CLAMAV_MBL 9999
priority __CLAMAV_MSRBL 9999
priority __CLAMAV_VX 9999
# Work out what ClamAV detected and score accordingly
# ClamAV general signatures
meta CLAMAV_VIRUS (CLAMAV && !__CLAMAV_PHISH && !__CLAMAV_SANE && !__CLAMAV_MBL && !__CLAMAV_MSRBL && !__CLAMAV_VX)
describe CLAMAV_VIRUS Virus found by ClamAV default signatures
score CLAMAV_VIRUS 20.0
# ClamAV phishing signatures
meta CLAMAV_PHISH (CLAMAV && __CLAMAV_PHISH && !__CLAMAV_SANE && !__CLAMAV_PHISH_HEUR)
describe CLAMAV_PHISH Phishing email found by ClamAV default signatures
score CLAMAV_PHISH 10.0
# ClamAV phishing with heuristic engine (not signatures based, may lead to false positives)
# Available since ClamAV 0.91
meta CLAMAV_PHISH_HEUR (CLAMAV && __CLAMAV_PHISH_HEUR)
describe CLAMAV_PHISH_HEUR Phishing email found by ClamAV heuristic engine
score CLAMAV_PHISH_HEUR 2.0
# ClamAV SaneSecurity signatures from http://www.sanesecurity.com/clamav/
meta CLAMAV_SANE (CLAMAV && __CLAMAV_SANE)
describe CLAMAV_SANE SPAM found by ClamAV SaneSecurity signatures
score CLAMAV_SANE 15
# ClamAV MBL signatures from http://www.malware.com.br/
meta CLAMAV_MBL (CLAMAV && __CLAMAV_MBL)
describe CLAMAV_MBL Malware found by ClamAV MBL signatures
score CLAMAV_MBL 7.5
# ClamAV MSRBL signatures from http://www.msrbl.com/
meta CLAMAV_MSRBL (CLAMAV && __CLAMAV_MSRBL)
describe CLAMAV_MSRBL SPAM found by ClamAV MSRBL signatures
score CLAMAV_MSRBL 2.0
# ClamAV SecuriteInfo.com VX malware signatures from
# http://www.securiteinfo.com/services/clamav_unofficial_malwares_signatures.shtml
meta CLAMAV_VX (CLAMAV && __CLAMAV_VX)
describe CLAMAV_VX Malware found by SecuriteInfo.com VX signatures
score CLAMAV_VX 5.0
Just 'cause I link to a page and say little else doesn't mean I am not being nice.
https://www.hmailserver.com/documentation
mattg wrote: Here is a snippet of my custom.cf (I changed the name so that it wasn't overwritten on SpamAssassin upgrade) ...
Hi Matt,
I know a few months have already passed. I want to know where to put your custom.cf. Is it under the user directory (~/.spamassassin)?
Is it OK to put the whitelist_from rules in local.cf?
Yep a few months, and I've changed since then
I have a whitelist.cf for just my whitelist entries
I have the KAM rule set in KAM.cf >> https://www.pccc.com/downloads/SpamAssassin/contrib/
I have non-KAM rules from the same source
I have a zzLast.cf to negate any rules that are auto-created from the above two lists
I have a blacklist.cf, a matt.cf, a nerds.cf (does the country of origin stuff) and more.
It seems you can have multiple .cf files, and they ALL get read individually.
All of these are in my /etc/spamassassin/ folder on my UBUNTU system. I don't use the Jam Software windows variant of SpamAssassin.
Just 'cause I link to a page and say little else doesn't mean I am not being nice.
https://www.hmailserver.com/documentation
mattg wrote: All of these are in my /etc/spamassassin/ folder on my UBUNTU system. I don't use the Jam Software windows variant of SpamAssassin
It's similar on the Windows version. In JAM, the local.cf is found by default in C:\Program Files\JAM Software\SpamAssassin for Windows\etc\spamassassin. Place other custom .CFs here too.
5.7 on test.
SpamassassinForWindows 3.4.0 spamd service
AV: Clamwin + Clamd service + sanesecurity defs: https://www.hmailserver.com/forum/viewtopic.php?f=21&t=26829
jimimaseye wrote: It's similar on the Windows version. In JAM, the local.cf is found by default in C:\Program Files\JAM Software\SpamAssassin for Windows\etc\spamassassin. Place other custom .CFs here too.
The file path.config in the SpamAssassin directory will specify locations:
Code: Select all
DEF_RULES_DIR=./share/spamassassin
LOCAL_RULES_DIR=./etc/spamassassin
LOCAL_STATE_DIR=./share
SørenR.
Algorithm (noun.)
Word used by programmers when they do not want to explain what they did.
|
Python - 65535
http://ideone.com/knKRhn
from math import exp, log
def divnr(p, q):
"""
Integer division p/q using Newton-Raphson Division.
Assumes p > q > 0.
"""
sp = p.bit_length()-1
sq = q.bit_length()-1
sr = sp - sq + 1
s = []
t = sr
while t > 15:
s = [t] + s
t = (t>>1) + 1
# Base-case division
r = (1 << (t<<1)) / (q >> sq-t)
for u in s:
r = (r << u-t+1) - (r*r * (q >> sq-u) >> (t<<1))
t = u
return (r * (p >> sq)) >> sr
def pibs(a, b):
if a == b:
if a == 0:
return (1, 1, 1123)
p = a*(a*(32*a-48)+22)-3
q = a*a*a*24893568
t = 21460*a+1123
return (p, -q, p*t)
m = (a+b) >> 1
p1, q1, t1 = pibs(a, m)
p2, q2, t2 = pibs(m+1, b)
return (p1*p2, q1*q2, q2*t1 + p1*t2)
def ebs(a, b):
if a == b:
if a == 0:
return (1, 1)
return (1, a)
m = (a+b) >> 1
p1, q1 = ebs(a, m)
p2, q2 = ebs(m+1, b)
return (p1*q2+p2, q1*q2)
if __name__ == '__main__':
n = input()
pi_terms = int(n*0.16975227728583067)
# 10^n == e^p
p = n*2.3025850929940457
# Lambert W_0(p/e) a la Newton
k = log(p) - 1
w = k - (k-1)/(k+1)
while k > w:
k = w
w -= (k - p*exp(-k-1))/(k+1)
# InverseGamma(e^p) approximation
e_terms = int(p / w)
pp, pq, pt = pibs(0, pi_terms)
ep, eq = ebs(0, e_terms)
z = 10**n
p = 3528*z*ep*abs(pq)
q = eq*abs(pt)
pie = divnr(p, q)
print pie,
Ideone doesn't seem to have gmpy2 installed, which is unfortunate for at least two reasons. One, because it would make the calculation a lot faster, and two, because it makes any formula requiring an arbitrary precision square root impractical.
The formula I use for π was listed by Ramanujan as Formula (39):
$$\frac{4}{\pi}=\sum_{k=0}^{\infty}\frac{(-1)^{k}\,(4k)!\,(1123+21460k)}{882^{2k+1}\,(4^{k}\,k!)^{4}}$$
which converges at the rate of ~5.89 digits per term. To my knowledge, this is the fastest converging series of its kind that doesn't require the evaluation of an arbitrary precision square root. Formula (44) in the same paper (convergence rate ~7.98 digits per term) is most often referred to as the Ramanujan Formula.
The formula I use for e is the sum of inverse factorials. The number of terms required is calculated as Γ⁻¹(10^n), using an approximation I found on mathoverflow. The Lambert W0 component is found using Newton's Method.
The calculation of each of these summations is done via the Fast E-function Evaluation (more generally known as binary splitting), originally devised by Karatsuba. The method reduces a summation of n terms to a single rational value p/q. These two values are then multiplied to produce the final result.
Update:
Profiling revealed that over half of the time needed for the calculation was spent in the final division. Only the uppermost log2(10^n) bits of q are needed to obtain full precision, so I trim a few off beforehand. The code now fills the Ideone output buffer in 3.33s.
Update 2:
Since this is an optimization challenge, I decided to write my own division routine to combat CPython's slowness. The implementation of divnr above uses Newton-Raphson Division. The general idea is to calculate d = 1/q · 2^n using Newton's Method, where n is the number of bits the result requires, and compute the result as p · d >> n. Runtime is now 2.87s - and this is without even chopping off bits before the calculation; it's unnecessary for this method.
|
I want to remove all empty strings from a list of strings in python.
My idea looks like this:
while '' in str_list:
str_list.remove('')
Is there any more pythonic way to do this?
I would use filter:
str_list = filter(None, str_list) # fastest
str_list = filter(bool, str_list) # fastest
str_list = filter(len, str_list) # a bit slower
str_list = filter(lambda item: item, str_list) # slower than list comprehension
Python 3 returns an iterator from filter, so it should be wrapped in a call to list():
str_list = list(filter(None, str_list)) # fastest
(etc.)
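For comparison, the list comprehension that appears in the timings below is a simple drop-in alternative:
str_list = [s for s in str_list if s]  # keep only non-empty strings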
Tests:
>>> timeit('filter(None, str_list)', 'str_list=["a"]*1000', number=100000)
2.4797441959381104
>>> timeit('filter(bool, str_list)', 'str_list=["a"]*1000', number=100000)
2.4788150787353516
>>> timeit('filter(len, str_list)', 'str_list=["a"]*1000', number=100000)
5.2126238346099854
>>> timeit('[x for x in str_list if x]', 'str_list=["a"]*1000', number=100000)
13.354584932327271
>>> timeit('filter(lambda item: item, str_list)', 'str_list=["a"]*1000', number=100000)
17.427681922912598
|
Note: all of the data-saving work is done in the pipelines.py file.
Saving the data as a JSON file
spider_closed is a signal-triggered method.
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html

from scrapy.pipelines.images import ImagesPipeline   # import the image downloader pipeline module
import codecs
import json


class AdcPipeline(object):                            # data-processing class; must inherit from object
    def __init__(self):
        self.file = codecs.open('shuju.json', 'w', encoding='utf-8')   # open the JSON file at initialisation

    def process_item(self, item, spider):             # process_item() receives each item the spider finally yields
        # print('Article title: ' + item['title'][0])
        # print('Article thumbnail URL: ' + item['img'][0])
        # print('Article thumbnail save path: ' + item['img_tplj'])   # filled in by the image downloader after download
        # save the data as a JSON file
        lines = json.dumps(dict(item), ensure_ascii=False) + '\n'      # convert the data object to a JSON line
        self.file.write(lines)                         # write the JSON line to the file
        return item

    def spider_closed(self, spider):                   # triggered by a spider signal once the data processing is finished
        self.file.close()                              # close the opened file


class imgPipeline(ImagesPipeline):                     # custom image downloader, inheriting Scrapy's built-in ImagesPipeline
    def item_completed(self, results, item, info):     # item_completed() exposes the save path of the downloaded image
        for ok, value in results:
            img_lj = value['path']                     # the image save path
            # print(ok)
            item['img_tplj'] = img_lj                  # fill the image save path into the field defined in items.py
        return item                                    # hand the item on to the item container in items.py

# Note: after the custom image downloader is set up, you still need to
Saving the data to a database
We use the ORM framework SQLAlchemy to save the data.
The database helper file:
#!/usr/bin/env python
# -*- coding:utf-8 -*-
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column
from sqlalchemy import Integer, String, TIMESTAMP
from sqlalchemy import ForeignKey, UniqueConstraint, Index
from sqlalchemy.orm import sessionmaker, relationship
from sqlalchemy import create_engine

# configure the database engine
ENGINE = create_engine("mysql+pymysql://root:279819@127.0.0.1:3306/cshi?charset=utf8", max_overflow=10, echo=True)

Base = declarative_base()                  # create the SQLAlchemy ORM base class


class SendMsg(Base):                       # define the table
    __tablename__ = 'sendmsg'
    id = Column(Integer, primary_key=True, autoincrement=True)
    title = Column(String(300))
    img_tplj = Column(String(300))


def init_db():
    Base.metadata.create_all(ENGINE)       # create the defined tables in the database


def drop_db():
    Base.metadata.drop_all(ENGINE)         # drop the defined tables from the database


def session():
    cls = sessionmaker(bind=ENGINE)        # create a sessionmaker class for table operations
    return cls()

# drop_db()   # drop the tables
# init_db()   # create the tables
The pipelines.py file:
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html

from scrapy.pipelines.images import ImagesPipeline   # import the image downloader pipeline module
from adc import shujuku as ORM                        # import the database helper module


class AdcPipeline(object):                            # data-processing class; must inherit from object
    def __init__(self):
        ORM.init_db()                                 # create the database tables

    def process_item(self, item, spider):             # process_item() receives each item the spider finally yields
        print('Article title: ' + item['title'][0])
        print('Article thumbnail URL: ' + item['img'][0])
        print('Article thumbnail save path: ' + item['img_tplj'])   # filled in by the image downloader after download
        mysq = ORM.session()
        shuju = ORM.SendMsg(title=item['title'][0], img_tplj=item['img_tplj'])
        mysq.add(shuju)
        mysq.commit()
        return item


class imgPipeline(ImagesPipeline):                     # custom image downloader, inheriting Scrapy's built-in ImagesPipeline
    def item_completed(self, results, item, info):     # item_completed() exposes the save path of the downloaded image
        for ok, value in results:
            img_lj = value['path']                     # the image save path
            # print(ok)
            item['img_tplj'] = img_lj                  # fill the image save path into the field defined in items.py
        return item                                    # hand the item on to the item container in items.py

# Note: after the custom image downloader is set up, you still need to
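For these pipelines to actually run, they have to be registered in the project's settings.py. A minimal sketch, assuming the pipelines module lives at adc.pipelines (the priority numbers and the IMAGES_STORE path are likewise assumptions):

# settings.py (sketch)
ITEM_PIPELINES = {
    'adc.pipelines.imgPipeline': 200,   # run the image downloader first so item['img_tplj'] gets filled
    'adc.pipelines.AdcPipeline': 300,   # then write the item to the database / JSON file
}
IMAGES_STORE = 'images'                 # folder where ImagesPipeline stores the downloaded files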
|
I previously asked a question about how to read this .txt file using pandas. I was trying with pandas.read_csv.
What I found is that I cannot read this file using read_csv unless I remove the header data (everything down to the "#" lines).
The problem is, I need to extract data like Well Name, Well KB, Well Type... from the header data. Is there a way to do this using Pandas? Or will I just have to read it some other way?
My original question was here:
The original text file:
# WELL TRACE FROM PETREL
# WELL NAME: ZZ-0113
# WELL HEAD X-COORDINATE: 9999999.00000000 (m)
# WELL HEAD Y-COORDINATE: 9999999.00000000 (m)
# WELL KB: 159.00000000 (ft)
# WELL TYPE: OIL
# MD AND TVD ARE REFERENCED (=0) AT KB AND INCREASE DOWNWARDS
# ANGLES ARE GIVEN IN DEGREES
# XYZ TRACE IS GIVEN IN COORDINATE SYSTEM WGS_1924_UTM_Zone_42N
# AZIMUTH REFERENCE TRUE NORTH
# DX DY ARE GIVEN IN GRID NORTH IN m-UNITS
# DEPTH (Z, TVD) GIVEN IN ft-UNITS
#======================================================================================================================================
MD X Y Z TVD DX DY AZIM INCL DLS
#======================================================================================================================================
0.0000000000 999999.00000 9999999.0000 159.00000000 0.0000000000 0.0000005192 -0.000000000 1.3487006929 0.0000000000 0.0000000000
132.00000000 999999.08032 9999999.9116 27.000774702 131.99922530 0.0803153923 -0.088388779 139.08870069 0.3400000000 0.2575757504
221.00000000 999999.19115 9999999.8017 -61.99775149 220.99775149 0.1911487882 -0.198290891 132.93870069 0.3200000000 0.0456726104
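One possible approach, sketched under the assumption that the layout shown above is fixed (the file name well_trace.txt is a placeholder): parse the "#" header lines yourself, then let pandas read the table part.

import pandas as pd

def read_well_trace(path):
    """Parse the '#'-prefixed header into a dict and the table into a DataFrame."""
    header = {}
    with open(path) as f:
        for line in f:
            if not line.startswith('#'):
                break
            text = line.lstrip('#').strip()
            if ':' in text:                      # e.g. "WELL NAME: ZZ-0113"
                key, value = text.split(':', 1)
                header[key.strip()] = value.strip()
    # every '#' line (header and separators) is ignored; the remaining first
    # non-comment line ("MD X Y Z ...") becomes the column names
    df = pd.read_csv(path, comment='#', delim_whitespace=True)
    return header, df

header, df = read_well_trace('well_trace.txt')
print(header.get('WELL NAME'), header.get('WELL KB'))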
|
I am currently trying to get a decent score (> 40% accuracy) with Keras on CIFAR 100. However, I'm experiencing a weird behaviour of a CNN model: It tends to predict some classes (2 - 5) much more often than others:
The pixel at position (i, j) contains the count of how many elements of the validation set from class i were predicted to be of class j. Thus the diagonal contains the correct classifications; everything else is an error. The two vertical bars indicate that the model often predicts those classes, even when that is not the true class.
CIFAR 100 is perfectly balanced: All 100 classes have 500 training samples.
Why does the model tend to predict some classes MUCH more often than other classes? How can this be fixed?
The code
Running this takes a while.
#!/usr/bin/env python
from __future__ import print_function
from keras.datasets import cifar100
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D
from keras.utils import np_utils
from sklearn.model_selection import train_test_split
import numpy as np
batch_size = 32
nb_classes = 100
nb_epoch = 50
data_augmentation = True
# input image dimensions
img_rows, img_cols = 32, 32
# The CIFAR10 images are RGB.
img_channels = 3
# The data, shuffled and split between train and test sets:
(X, y), (X_test, y_test) = cifar100.load_data()
X_train, X_val, y_train, y_val = train_test_split(X, y,
test_size=0.20,
random_state=42)
# Shuffle training data
perm = np.arange(len(X_train))
np.random.shuffle(perm)
X_train = X_train[perm]
y_train = y_train[perm]
print('X_train shape:', X_train.shape)
print(X_train.shape[0], 'train samples')
print(X_val.shape[0], 'validation samples')
print(X_test.shape[0], 'test samples')
# Convert class vectors to binary class matrices.
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)
Y_val = np_utils.to_categorical(y_val, nb_classes)
model = Sequential()
model.add(Convolution2D(32, 3, 3, border_mode='same',
input_shape=X_train.shape[1:]))
model.add(Activation('relu'))
model.add(Convolution2D(32, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Convolution2D(64, 3, 3, border_mode='same'))
model.add(Activation('relu'))
model.add(Convolution2D(64, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(1024))
model.add(Activation('tanh'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
X_train = X_train.astype('float32')
X_val = X_val.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_val /= 255
X_test /= 255
if not data_augmentation:
print('Not using data augmentation.')
model.fit(X_train, Y_train,
batch_size=batch_size,
nb_epoch=nb_epoch,
validation_data=(X_val, y_val),
shuffle=True)
else:
print('Using real-time data augmentation.')
# This will do preprocessing and realtime data augmentation:
datagen = ImageDataGenerator(
featurewise_center=False, # set input mean to 0 over the dataset
samplewise_center=False, # set each sample mean to 0
featurewise_std_normalization=False, # divide inputs by std of the dataset
samplewise_std_normalization=False, # divide each input by its std
zca_whitening=False, # apply ZCA whitening
rotation_range=0, # randomly rotate images in the range (degrees, 0 to 180)
width_shift_range=0.1, # randomly shift images horizontally (fraction of total width)
height_shift_range=0.1, # randomly shift images vertically (fraction of total height)
horizontal_flip=True, # randomly flip images
vertical_flip=False) # randomly flip images
# Compute quantities required for featurewise normalization
# (std, mean, and principal components if ZCA whitening is applied).
datagen.fit(X_train)
# Fit the model on the batches generated by datagen.flow().
model.fit_generator(datagen.flow(X_train, Y_train,
batch_size=batch_size),
samples_per_epoch=X_train.shape[0],
nb_epoch=nb_epoch,
validation_data=(X_val, Y_val))
model.save('cifar100.h5')
Visualization code
#!/usr/bin/env python
"""Analyze a cifar100 keras model."""
from keras.models import load_model
from keras.datasets import cifar100
from sklearn.model_selection import train_test_split
import numpy as np
import json
import io
import matplotlib.pyplot as plt
try:
to_unicode = unicode
except NameError:
to_unicode = str
n_classes = 100
def plot_cm(cm, zero_diagonal=False):
"""Plot a confusion matrix."""
n = len(cm)
size = int(n / 4.)
fig = plt.figure(figsize=(size, size), dpi=80, )
plt.clf()
ax = fig.add_subplot(111)
ax.set_aspect(1)
res = ax.imshow(np.array(cm), cmap=plt.cm.viridis,
interpolation='nearest')
width, height = cm.shape
fig.colorbar(res)
plt.savefig('confusion_matrix.png', format='png')
# Load model
model = load_model('cifar100.h5')
# Load validation data
(X, y), (X_test, y_test) = cifar100.load_data()
X_train, X_val, y_train, y_val = train_test_split(X, y,
test_size=0.20,
random_state=42)
# Calculate confusion matrix
y_val_i = y_val.flatten()
y_val_pred = model.predict(X_val)
y_val_pred_i = y_val_pred.argmax(1)
cm = np.zeros((n_classes, n_classes), dtype=np.int)
for i, j in zip(y_val_i, y_val_pred_i):
cm[i][j] += 1
acc = sum([cm[i][i] for i in range(100)]) / float(cm.sum())
print("Validation accuracy: %0.4f" % acc)
# Create plot
plot_cm(cm)
# Serialize confusion matrix
with io.open('cm.json', 'w', encoding='utf8') as outfile:
str_ = json.dumps(cm.tolist(),
indent=4, sort_keys=True,
separators=(',', ':'), ensure_ascii=False)
outfile.write(to_unicode(str_))
Red herrings
tanh
I've replaced tanh by relu. The history csv looks ok, but the visualization has the same problem:
Please also note that the validation accuracy here is only 3.44%.
Dropout + tanh + border mode
Removing dropout, replacing tanh by relu, setting border mode to same everywhere: history csv
The visualization code still gives a much lower accuracy (8.50% this time) than the keras training code.
Q & A
The following is a summary of the comments:
The data is evenly distributed over the classes. So there is no "over training" of those two classes.
Data augmentation is used, but without data augmentation the problem persists.
The visualization is not the problem.
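One observation on the scripts above (a guess, not something stated in the original post): the training script scales the images with /= 255, while the visualization script calls model.predict on the raw, unscaled X_val, which by itself could account for much of the gap between the two reported accuracies. A minimal sketch of the adjustment in the visualization script:

# scale the validation images the same way they were scaled during training
X_val = X_val.astype('float32') / 255
y_val_pred = model.predict(X_val)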
|
I'm having some trouble figuring out how to handle collisions that hit the top or the bottom of the paddle. How can I tell those collisions apart?
Here is a small code snippet to give you an idea of my approach. Right now it doesn't matter where the ball hits.
# the ball has hit one of the paddles. send it back in another direction.
if paddleRect.colliderect(ballRect) or paddle2Rect.colliderect(ballRect):
    ballHitPaddle.play()
    if directionOfBall == 'upleft':
        directionOfBall = 'upright'
    elif directionOfBall == 'upright':
        directionOfBall = 'upleft'
    elif directionOfBall == 'downleft':
        directionOfBall = 'downright'
    elif directionOfBall == 'downright':
        directionOfBall = 'downleft'
Thanks in advance.
**EDIT** Paddle rect:

      top
     ____
    |    |
    |    |
    |    |  Sides
    |    |
     ----
    bottom
I need to know whether the ball hit the top or the bottom.
Basically you have to look at the angle between the center of the rectangle you use for the paddle and the center of the ball's rectangle (which can also be a single point, since we collapse the ball's rect into a point).
Then compare the angle between the center points with the quadrants given by the paddle's corner angles (this gives you left, right, top, bottom). The program below just simplifies the lookup into an array with some normalization.
The part relevant to your question is really the DirRect class.
import pygame
import math

class DirRect(pygame.Rect):
    def direction_to_rect(self, drect):
        ar = math.atan2(self.centery - self.top, self.right - self.centerx)  # half of the angle of the right side
        # construct the corner angles into an array to search for index such that the index indicates direction
        # this is normalized into [0, 2π] to make searches easier (no negative numbers and stuff)
        dirint = [ 2*ar, math.pi, math.pi+2*ar, 2*math.pi]
        # calculate angle towards the center of the other rectangle, + ar for normalization
        ad = math.atan2(self.centery - drect.centery, drect.centerx - self.centerx) + ar
        # again normalization, since atan2 outputs values in the range of [-π, π]
        if ad < 0:
            ad = 2*math.pi + ad
        # search for the quadrant we are in and return it
        for i in xrange(len(dirint)):
            if ad < dirint[i]:
                return i
        # just in case, -1 as error indicator
        return -1

pygame.init()
screen = pygame.display.set_mode([400,400])
screen.fill([255,255,255])

# This is the paddle
paddle = DirRect( (150,150), (40,20) )

# colorize directional information
colors = [ (255,0,0),     # right
           (0,255,0),     # up
           (0,0,255),     # left
           (0,0,0),       # down
           (255,255,255)  # error = -1
         ]

# show direction for every point
dsurf = pygame.display.get_surface()
for x in xrange(dsurf.get_width()):
    for y in xrange(dsurf.get_height()):
        prect = pygame.Rect((x,y), (0,0))
        dsurf.set_at( (x,y), colors[paddle.direction_to_rect(prect)])

# show paddle
pygame.draw.rect( dsurf, (128,128,128), paddle, 1)

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.KEYDOWN and event.key == pygame.K_ESCAPE:
            running = False
    pygame.display.update()
pygame.quit()
def on_collide(paddle, ball):
    # we collided, so which side? Args are the `Rect`s of paddle and ball.
    if paddle.centery < ball.centery:
        print("paddle bottom")
    elif paddle.centery > ball.centery:
        print("paddle top")
    if paddle.centerx < ball.centerx:
        print("paddle right")
    elif paddle.centerx > ball.centerx:
        print("paddle left")
|
@Botenga delete this code you have at the end of your html:
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" integrity="sha384-BVYiiSIFeK1dGmJRAkycuHAHRg32OmUcww7on3RYdg4Va+PmSTsz/K68vbdEjh4u" crossorigin="anonymous">
<link rel="stylesheet" type="text/css" href="bootstrap.css">
</body>
and it should work now
https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1 without the jquery.min.js part
botenga sends brownie points to @sorinr :sparkles: :thumbsup: :sparkles:
html-joe sends brownie points to @sorinr :sparkles: :thumbsup: :sparkles:
var img = document.createElement('img')
img.src = stringified.weather[0].icon
hoxtygen sends brownie points to @mot01 :sparkles: :thumbsup: :sparkles:
document.getElementById('image-container').innerHTML = "<img src = "+stringified.weather[0].icon+">";
hoxtygen sends brownie points to @sorinr and @mot01 :sparkles: :thumbsup: :sparkles:
await try{this.getStreamData()}.catch(error){console.log(error)}; didn't work out when I made getStreamData async. Here's my pen:
primuscovenant sends brownie points to @heroiczero :sparkles: :thumbsup: :sparkles:
catherinewoodward sends brownie points to @terensu-desu :sparkles: :thumbsup: :sparkles:
<html>, <body> sections in them - that is provided by the template. animate.css you can paste it into the resource boxes directly, or they have "quick adds" and a way to search for the package that you want. CodePen is a nice useful site - just remember to stick with "Pen" items for your pages, as a free user (unless you've paid) you only have one "Project". I don't think that there is a limit to the number of "Pen" items? I have seen people get confused by the fact that they can only have one "project"... maybe that will be helpful to be aware of that.
@terensu-desu Sure!
<html>
<head>
<script type="text/javascript" src="https://safi.me.uk/typewriterjs/js/typewriter.js"></script>
<script>
var app = document.getElementById('app');
var typewriter = new Typewriter(app, {
loop: true
});
typewriter.typeString('Hello World!')
.pauseFor(2500)
.deleteAll()
.typeString('Strings can be removed')
.pauseFor(2500)
.deleteChars(7)
.typeString('altered!')
.start();
</script>
</head>
<body>
<div id="app"></div>
</body>
</html>
This is my code currently. Nothing shows when I run it. Just a blank page!
indikoro sends brownie points to @khaduch :sparkles: :thumbsup: :sparkles:
<script> element to the end, just before the </body> closing tag. That will ensure that the page is loaded before it tries to run the JS. $(document).wait()
hi can someone tell me how to fix this issue
i have setup a fixed navbar , the issue is the banner goes below the navbar
how to get the banner to showup after the navbar?
sorry reycuban, you can't send brownie points to yourself! :sparkles: :sparkles:
reycuban sends brownie points to @tiagocorreiaalmeida :sparkles: :thumbsup: :sparkles:
its not actually, error . but when i trying to post the data and getting back the data its actually working good . but when ever i reload the page the data's i got by the server and displayed in browser is actually removed , why?additional info'
robomongo is not supported for my system
so i cant able to seet the data stored or not!
my system is 32bit os!
its not actually, error . but when i trying to post the data and getting back the data its actually working good . but when ever i reload the page the data's i got by the server and displayed in browser is actually removed , why?additional info'
robomongo is not supported for my system
so i cant able to seet the data stored or not!
my system is 32bit os!this is the problem.
const express = require('express');
const router = express.Router();
const cricketModel = require('../model/score');
router.get('/api/maxi',function(req,res){
res.send({"type" : "get"});
});
router.post('/api/maxi/',function(req,res){
cricketModel.create(req.body).then(function(data){
res.send(data);
console.log(data);
}).catch(err => console.error(err) && res.status(400).send(err));
});
router.delete('/api/maxi/:id',function(req,res){
res.send({"type" : "delete"});
});
router.put('/api/maxi/:id',function(req,res){
res.send({"type" : "update"});
});
module.exports = router;
const express = require('express');
const router = require('./api/router.js');
const bodyParser = require('body-parser');
const mongoose = require('mongoose');
const app = express();
mongoose.connect("mongodb://localhost/gomaxi");
mongoose.Promise = global.Promise;
app.use(express.static('public'));
app.use(bodyParser.json());
app.use(router);
app.listen(4000,function(){
console.log("server is listening for the request on port 4000 , hurray !");
});
its not actually, error . but when i trying to post the data and getting back the data its actually working good . but when ever i reload the page the data's i got by the server and displayed in browser is actually removed , why?
note :
robomongo is not supported for my system
so i cant able to seet the data stored or not!
my system is 32bit os!
data back
router.get('/api/maxi',function(req,res){
console.log('1');
res.send({"type" : "get"});
});
router.post('/api/maxi/',function(req,res){
console.log('2')
cricketModel.create(req.body).then(function(data){
res.send(data);
console.log(data);
}).catch(err => console.error(err) && res.status(400).send(err));
});
router.delete('/api/maxi/:id',function(req,res){
res.send({"type" : "delete"});
});
router.put('/api/maxi/:id',function(req,res){
res.send({"type" : "update"});
});
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>maxi</title>
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css">
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script>
</head>
<body>
<input id="search1" placeholder="enter playername">
<input id="search2" placeholder="enter playerscore">
<button class="btn-primary">click</button>
<div class="well"></div>
</body>
<script>
$(document).ready(function(){
$(".btn-primary").click(function(){
console.log("click");
var obj = {
"player" : $("#search1").val(),
"score" : $("#search2").val()
};
$.ajax({
type : "POST",
url : "http://localhost:4000/api/maxi/",
contentType : "application/json",
data : JSON.stringify(obj),
success : function(data){
console.log(data);
$(".well").append("<h1>"+data.player + data.score+"</h1>");
},
error : function(err){
console.log('error' ,err);
},
dataType : "json"
});
});
});
</script>
</html>
```router.post('/', function (req, res, next) {
var user = new User({
firstName: req.body.firstName,
lastName: req.body.lastName,
password: bcrypt.hashSync(req.body.password, 10),
email: req.body.email
});
user.save(function(err, result) {
if (err) {
// If there is an error, return from this function immediately with
// the error code
return res.status(500).json({
title: 'An error occurred',
error: err
});
}
res.status(201).json({
message: 'Saved User',
obj: result
});
});
});```
const express = require('express');
const router = express.Router();
const cricketModel = require('../model/score');
router.get('/api/maxi',function(req,res){
res.send({"type" : "get"});
});
router.post('/api/maxi/',function(req,res){
console.log("2");
cricketModel(req.body).save().then(function(data){
res.send(data);
console.log(data);
}).catch(err => console.error(err) && res.status(400).send(err));
});
router.delete('/api/maxi/:id',function(req,res){
res.send({"type" : "delete"});
});
router.put('/api/maxi/:id',function(req,res){
res.send({"type" : "update"});
});
module.exports = router;
@1532j0004kg how about ```router.post('/api/maxi/', function (req, res, next) {
console.log('2');
console.log(body);
cricketModel.save(function (err, result) {
if (err) {
// If there is an error, return from this function immediately with
// the error code
return res.status(500).json({
title: 'An error occurred',
error: err
});
}
res.status(201).json({
message: 'Saved User',
obj: result
});
});
```
router.post('/api/maxi/', function (req, res, next) {
console.log('2');
console.log(body);
cricketModel.save(function (err, result) {
if (err) {
// If there is an error, return from this function immediately with
// the error code
return res.status(500).json({
title: 'An error occurred',
error: err
});
}
res.status(201).json({
message: 'Saved User',
obj: result
});
});
Mongoose: scores.insert({ player: 'q1', score: 1, _id: ObjectId("5a47bd6590f3561
5fc1c5ffe"), __v: 0 })
{ __v: 0, player: 'q1', score: 1, _id: 5a47bd6590f35615fc1c5ffe }
2
Mongoose: scores.insert({ player: 'q1w2', score: 1, _id: ObjectId("5a47bd6c90f35
615fc1c5fff"), __v: 0 })
{ __v: 0,
player: 'q1w2',
score: 1,
_id: 5a47bd6c90f35615fc1c5fff }
2
Mongoose: scores.insert({ player: 'q1w2as', score: 1, _id: ObjectId("5a47bd7390f
35615fc1c6000"), __v: 0 })
{ __v: 0,
player: 'q1w2as',
score: 1,
_id: 5a47bd7390f35615fc1c6000 }
```router.post('/api/maxi/', function (req, res, next) {
console.log('2');
console.log(body);
var cricketModel = new CricketModel({
firstField: req.body.firstField, // Your model fields here
lastField: req.body.lastField,
});
cricketModel.save(function (err, result) {
if (err) {
// If there is an error, return from this function immediately with
// the error code
return res.status(500).json({
title: 'An error occurred',
error: err
});
}
res.status(201).json({
message: 'Saved User',
obj: result
});
});
});```
C:\Users\dinesh\Desktop\app1>scores.find();
'scores.find' is not recognized as an internal or external command,
operable program or batch file.
C:\Users\dinesh\Desktop\app1>mongo.exe
'mongo.exe' is not recognized as an internal or external command,
operable program or batch file.

C:\Users\dinesh\Desktop\app1>start mongo.exe
The system cannot find the file mongo.exe.
C:\database_mongo\mongodb-win32-i386-3.2.18-4-g752daa3\bin>
> scores.find();
2017-12-30T08:49:19.995-0800 E QUERY [thread1] ReferenceError: scores is not
defined :
@(shell):1:1
C:\database_mongo\mongodb-win32-i386-3.2.18-4-g752daa3\bin>mongo
2017-12-30T08:50:02.775-0800 I CONTROL [main] Hotfix KB2731284 or later update
is not installed, will zero-out data files
MongoDB shell version: 3.2.18-4-g752daa3
connecting to: test
Server has startup warnings:
2017-12-30T06:55:07.242-0800 I CONTROL [initandlisten]
2017-12-30T06:55:07.242-0800 I CONTROL [initandlisten] ** WARNING: This 32-bit
MongoDB binary is deprecated
2017-12-30T06:55:07.243-0800 I CONTROL [initandlisten]
2017-12-30T06:55:07.244-0800 I CONTROL [initandlisten]
2017-12-30T06:55:07.245-0800 I CONTROL [initandlisten] ** NOTE: This is a 32 bi
t MongoDB binary.
2017-12-30T06:55:07.270-0800 I CONTROL [initandlisten] ** 32 bit builds a
re limited to less than 2GB of data (or less with --journal).
2017-12-30T06:55:07.271-0800 I CONTROL [initandlisten] ** Note that journ
aling defaults to off for 32 bit and is currently off.
2017-12-30T06:55:07.272-0800 I CONTROL [initandlisten] ** See http://doch
ub.mongodb.org/core/32bit
2017-12-30T06:55:07.274-0800 I CONTROL [initandlisten]
>
> use database
switched to db database
> scores.find()
2017-12-30T08:52:26.512-0800 E QUERY [thread1] ReferenceError: scores is not
defined :
@(shell):1:1
> collections.find()
2017-12-30T08:52:36.159-0800 E QUERY [thread1] ReferenceError: collections is
not defined :
@(shell):1:1
C:\database_mongo\mongodb-win32-i386-3.2.18-4-g752daa3\bin>mongo
2017-12-30T08:50:02.775-0800 I CONTROL [main] Hotfix KB2731284 or later update
is not installed, will zero-out data files
MongoDB shell version: 3.2.18-4-g752daa3
connecting to: test
Server has startup warnings:
2017-12-30T06:55:07.242-0800 I CONTROL [initandlisten]
2017-12-30T06:55:07.242-0800 I CONTROL [initandlisten] ** WARNING: This 32-bit
MongoDB binary is deprecated
2017-12-30T06:55:07.243-0800 I CONTROL [initandlisten]
2017-12-30T06:55:07.244-0800 I CONTROL [initandlisten]
2017-12-30T06:55:07.245-0800 I CONTROL [initandlisten] ** NOTE: This is a 32 bi
t MongoDB binary.
2017-12-30T06:55:07.270-0800 I CONTROL [initandlisten] ** 32 bit builds a
re limited to less than 2GB of data (or less with --journal).
2017-12-30T06:55:07.271-0800 I CONTROL [initandlisten] ** Note that journ
aling defaults to off for 32 bit and is currently off.
2017-12-30T06:55:07.272-0800 I CONTROL [initandlisten] ** See http://doch
ub.mongodb.org/core/32bit
2017-12-30T06:55:07.274-0800 I CONTROL [initandlisten]
>
C:\mongodbs
C:\database_mongo\mongodb-win32-i386-3.2.18-4-g752daa3\bin>mongod --dbpath C:\mongodbs
C:\database_mongo\mongodb-win32-i386-3.2.18-4-g752daa3\bin>mongo
C:\database_mongo\mongodb-win32-i386-3.2.18-4-g752daa3\bin>mongod --dbpath C:\mo
ngodbs
2017-12-30T08:59:19.588-0800 I CONTROL [main]
2017-12-30T08:59:19.592-0800 W CONTROL [main] 32-bit servers don't have journal
ing enabled by default. Please use --journal if you want durability.
2017-12-30T08:59:19.593-0800 I CONTROL [main]
2017-12-30T08:59:19.602-0800 I CONTROL [main] Hotfix KB2731284 or later update
is not installed, will zero-out data files
2017-12-30T08:59:19.611-0800 I CONTROL [initandlisten] MongoDB starting : pid=3
544 port=27017 dbpath=C:\mongodbs 32-bit host=dinesh007
2017-12-30T08:59:19.614-0800 I CONTROL [initandlisten] targetMinOS: Windows Vis
ta/Windows Server 2008
2017-12-30T08:59:19.615-0800 I CONTROL [initandlisten] db version v3.2.18-4-g75
2daa3
2017-12-30T08:59:19.617-0800 I CONTROL [initandlisten] git version: 752daa30609
5fb1610bb5db13b7b106ac87ec6cb
2017-12-30T08:59:19.618-0800 I CONTROL [initandlisten] allocator: tcmalloc
2017-12-30T08:59:19.619-0800 I CONTROL [initandlisten] modules: none
2017-12-30T08:59:19.622-0800 I CONTROL [initandlisten] build environment:
2017-12-30T08:59:19.623-0800 I CONTROL [initandlisten] distarch: i386
2017-12-30T08:59:19.624-0800 I CONTROL [initandlisten] target_arch: i386
2017-12-30T08:59:19.625-0800 I CONTROL [initandlisten] options: { storage: { db
Path: "C:\mongodbs" } }
2017-12-30T08:59:19.632-0800 E NETWORK [initandlisten] listen(): bind() failed
errno:10048 Only one usage of each socket address (protocol/network address/port
) is normally permitted. for socket: 0.0.0.0:27017
2017-12-30T08:59:19.633-0800 E STORAGE [initandlisten] Failed to set up sockets
during startup.
2017-12-30T08:59:19.635-0800 I CONTROL [initandlisten] dbexit: rc: 48
omgmerrickd sends brownie points to @vasejs and @import :sparkles: :thumbsup: :sparkles:
function palindrome(str) {var x = str.split('').reverse().join('');var y = x.replace(/[\W_]/g, '');var palindr = y.toLowerCase();if ( palindr == str){return true;}else {return false;}
}
palindrome("eye");
sorry vasejs, you can't send brownie points to yourself! :sparkles: :sparkles:
``` function palindrome(str) {
var x = str.split('').reverse().join('');
var y = x.replace(/[\W_]/g, '');
var palindr = y.toLowerCase();
if ( palindr == str){
return true;
}
else {
return false;
}
}
palindrome("eye"); ```
sakisbal sends brownie points to @vasejs :sparkles: :thumbsup: :sparkles:
return str.replace(/[\W_]/g, '').toLowerCase()=== str.replace(/[\W_]/g, '').toLowerCase().split('').reverse().join('');
|
This post gives a new definition of what a keyword is, together with an implementation based on Word2Vec. The definition itself is natural and reasonable; Word2Vec is only one simple way to realise it, and the same definition could just as well be implemented with other models.

When keyword extraction comes up, people usually think of TF-IDF and TextRank. Have you ever considered that Word2Vec can also be used to extract keywords? What's more, extraction with Word2Vec already carries a degree of semantic understanding rather than being plain counting, and it is still unsupervised!
What is a keyword? #
Admittedly, TF-IDF and TextRank are two very classic keyword-extraction algorithms, and both have a certain justification. The problem is that a reader who has never seen them would find that they come rather out of nowhere, and could hardly construct them from scratch. In other words, although the two algorithms look simple, they are not easy to think up. Without some background in information theory, it is hard to understand why IDF should take a logarithm rather than some other function, and how many readers would ever hit on the idea of using PageRank to judge the importance of a word?

At bottom the problem is this: keyword extraction and text summarisation both look like very natural tasks, but who has really thought about what the definition of a keyword is? I am not asking you to look it up in a dictionary and collect a pile of verbal definitions; I am asking for the mathematical one. What should the reasonable mathematical definition of a keyword be? Or, put differently, what is our purpose in extracting keywords?
Quite obviously, with keywords as with summaries, we want to grasp the gist of a text as quickly as possible. If an article's keyword is "deep learning", we know at once that it will not be spending its pages on some completely unrelated everyday topic. In other words, the keywords let us guess the gist of the text. Expressed mathematically, that is the conditional probability
$$p(s|w_i)$$
Here $s$ stands for the piece of text and $w_i$ is a word in it. If $w_i$ is a keyword of the text, it should make the probability above as large as possible. So we only need to compute this probability for every word in the sentence and sort in descending order to extract the keywords. Put plainly, the keywords are the words that best let us guess the original text. How do we estimate this probability? A naive Bayes assumption will do: if $s$ consists of the $n$ words $w_1,w_2,\dots,w_n$, then
$$p(s|w_i)=p(w_1,w_2,\dots,w_n|w_i)=\prod_{k=1}^n p(w_k|w_i)$$
So we only need to estimate the word-to-word transition probabilities $p(w_k|w_i)$ to obtain the conditional probability $p(s|w_i)$ and thereby complete the keyword extraction.

What does this have to do with Word2Vec? Estimating $p(w_k|w_i)$ requires statistics over a large amount of text; fortunately the process is unsupervised, and counting is a simple matter. We have a better tool, though: what is particularly good at modelling $p(w_k|w_i)$? The reader has probably guessed it already: Word2Vec, of course! Isn't Word2Vec's Skip-Gram model exactly a model of this probability? Given how fast and accurate Word2Vec is, there is no reason not to use it. (Of course, as said at the beginning, you are not obliged to use the naive Bayes assumption, still less to compute it with Word2Vec; but the definition of a keyword itself should be a reasonable one.)
Computing the probability with Word2Vec #

At this point the reader should see why I put so much emphasis on the Skip-Gram + Huffman Softmax combination in the previous two posts: that combination is precisely a model of $p(w_k|w_i)$. Because of how Huffman Softmax works, computing $p(w_k|w_i)$ takes a small detour; reference code is below:
import numpy as np
import gensim
model = gensim.models.word2vec.Word2Vec.load('word2vec_wx')
def predict_proba(oword, iword):
iword_vec = model[iword]
oword = model.wv.vocab[oword]
oword_l = model.syn1[oword.point].T
dot = np.dot(iword_vec, oword_l)
lprob = -sum(np.logaddexp(0, -dot) + oword.code*dot)
return lprob
This essentially follows the score_sg_pair function of Word2Vec in gensim. The rough procedure is: take the Huffman code (the path) of $w_k$ and the word vector of $w_i$, compute the probability at every node along the path, and multiply them together to obtain $p(w_k|w_i)$. Since we compute log-probabilities, the products become sums. How is the probability at each node computed? According to the Word2Vec formulas, the log-probability of each node is:
$$\begin{aligned}&\log \left(\frac{1}{1+e^{-\boldsymbol{x}^{\top} \boldsymbol{\theta}}}\right)^{1-d}\left(1-\frac{1}{1+e^{-\boldsymbol{x}^{\top} \boldsymbol{\theta}}}\right)^{d}\\
=&-(1-d)\log (1+e^{-\boldsymbol{x}^{\top} \boldsymbol{\theta}}) - d \log (1+e^{-\boldsymbol{x}^{\top} \boldsymbol{\theta}}) - d \boldsymbol{x}^{\top} \boldsymbol{\theta}\\
=&-\log (1+e^{-\boldsymbol{x}^{\top} \boldsymbol{\theta}}) - d \boldsymbol{x}^{\top} \boldsymbol{\theta}\end{aligned}$$
Here $\boldsymbol{\theta}$ is the node vector, $\boldsymbol{x}$ is the input word vector, and $d$ is the code of the node (either 0 or 1). The official score_sg_pair function is not written in quite that form, though, because
$$\begin{aligned}&-\log (1+e^{-\boldsymbol{x}^{\top} \boldsymbol{\theta}}) - d \boldsymbol{x}^{\top} \boldsymbol{\theta}\\
=&-\log \bigg[e^{d \boldsymbol{x}^{\top}\theta}(1+e^{-\boldsymbol{x}^{\top} \boldsymbol{\theta}})\bigg]\\
=&-\log \bigg(e^{d \boldsymbol{x}^{\top}\theta}+e^{(d-1)\boldsymbol{x}^{\top} \boldsymbol{\theta}}\bigg)\\
=&-\log \bigg(1+e^{-(-1)^d \boldsymbol{x}^{\top}\theta}\bigg)\end{aligned}$$
Practice first #

With the groundwork above, computing the keywords is now simple:
from collections import Counter
def keywords(s):
s = [w for w in s if w in model]
ws = {w:sum([predict_proba(u, w) for u in s]) for w in s}
return Counter(ws).most_common()
import pandas as pd  # imported mainly for nicer display of the results
import jieba
s = u'太阳是一颗恒星'  # "The sun is a star"
pd.Series(keywords(jieba.cut(s)))
The output is:
0 (恒星, -27.9013707845)
1 (太阳, -28.1072913493)
2 (一颗, -30.482187911)
3 (是, -36.3372344659)
Other examples (a government-website passage about the Ming Tombs, one about the Xiong'an property market, and one about costume design for period films):
>>> s=u'昌平区政府网站显示:明十三陵是世界上保存完整、埋葬皇帝最多的墓葬群,1961年被国务院公布为第一批全国重点文物保护单位,并于2003年被列为世界遗产名录。'
>>> pd.Series(keywords(jieba.cut(s)))
0 (文物保护, -261.691625676)
1 (名录, -272.297758506)
2 (世界遗产, -273.943120665)
3 (第一批, -280.781786703)
4 (列为, -281.663865896)
5 (明十三陵, -286.298893108)
6 (墓葬群, -287.463013816)
...
>>> s=u'雄安新区横空出世,吸引了众多外地炒房客前去购房。然而,在当地政府重拳限制非法炒房、楼市冻结的背景下,那些手揣买房钱却在雄安新区无处下手的投资需求,被挤压到周边地区。'
>>> pd.Series(keywords(jieba.cut(s)))
0 (炒房客, -326.997266407)
1 (楼市, -336.176584187)
2 (炒房, -337.190896137)
3 (买房, -344.613473556)
4 (购房, -346.396359454)
5 (非法, -350.207272082)
6 (外地, -355.860419218)
>>> s=u'如果给一部古装电影设计服装,必须要考虑故事发生在哪个朝代,汉朝为宽袍大袖,清朝则是马褂旗袍。可在京剧舞台上,几乎任何一个历史人物,根据他的性别年龄、身份地位、基本性格等等,都可以在现有的服饰里找到合适的行头。'
>>> pd.Series(keywords(jieba.cut(s)))
0 (朝代, -485.150966757)
1 (人物, -493.759615898)
2 (古装, -495.478962392)
3 (汉朝, -503.409908377)
4 (清朝, -503.45656029)
5 (旗袍, -504.76313228)
6 (身份, -507.624260109)
You can try it yourself. If you want to try it on your own corpus, simply train a Word2Vec (Skip-Gram + Huffman Softmax) model on that corpus and then call the code above.

You probably have some doubts #
By our original reasoning, $p(w_k|w_i)$ ought to be estimated over the whole sentence, yet Word2Vec only opens a small window for the computation; is that reasonable? In fact, although Word2Vec only uses a window, it has already established the connections between similar words. In other words, doing the above with Word2Vec effectively stacks up "similar words" for the evaluation, whereas the TF-IDF approach only stacks up "identical words". That is why we say keyword extraction with Word2Vec already brings some word meaning into the judgement. Moreover, by working with $p(w_k|w_i)$, Word2Vec takes the associations inside the article into account, which has a bit of a TextRank flavour: it is a second-order (binary) model, whereas TF-IDF only considers the information content of each word on its own and is merely a first-order (unary) model.

In addition, Word2Vec is trained with a neural network and comes with smoothing built in: even if two words never co-occur in the text, it still yields a fairly reasonable probability.

The price, of course, is efficiency: the TF-IDF algorithm runs in $\mathscr{O}(N)$, while extraction with Word2Vec is clearly $\mathscr{O}(N^2)$, where $N$ is the number of words in the sentence.
If you need to cite this post, please use:
苏剑林 (Su Jianlin). (Apr. 07, 2017). 《【不可思议的Word2Vec】3. 提取关键词》 [Blog post]. Retrieved from https://spaces.ac.cn/archives/4316
|
Handling Python Memory Issues when faced with Big Data
As the programming language Python develops over time, added functionality improves both its usability and performance. Python has become one of the foremost languages in Data Science, if not the foremost, and its handling of big data sets is one of the reasons why.
It's no wonder the language is so highly favoured, with libraries like Pandas and core functionality like generators that are both easy to use. There is some commentary about newer languages like Julia or Go taking precedence, but regardless, Python is here for the long run.
Researchers from all corners of academia complain about computational memory issues. Large data sets are inherent to problems in fields like Bioinformatics, Finance and, broadly, Machine Learning, so efficient and effective memory handling is required as a standard.
With Big Data comes Big Memory Issues
Let’s take the following example. Say we want to count how many rows are in a file so we write some inefficient code as follows:
def read_csv(file_name):
    # naive helper: read every line of the file into a list in memory at once
    return open(file_name, "r").readlines()

open_file = read_csv("standard_csv_file.csv")
row_count = 0
for row in open_file:
    row_count += 1

print("Row count is {}".format(row_count))
Now looking at this example, the function read_csv opens the file and loads all the contents into open_file. Then the program iterates over this list and increments row_count.
However, once the file size grows big (let's say, to 2 GB), does this piece of code still work as well? What if the file size is larger than the memory you have available?
Now, unfortunately, if the file size is stupendously large, you’ll probably notice your computer slows to a halt as you try to load it in. You might even need to kill the program entirely.
So, how can you handle these huge data files?
Generator functions allow you to declare a function that behaves like an iterator. Once an item has been presented from the iterator it’s expected to not be used again and can be cleared from memory. That means at any one point in time you only have one item in memory, rather than the entire problem set.
So in terms of counting rows in a file, we now load up one row at a time instead of loading the whole file all at once. To do this, we can simply rework the code and introduce the keyword yield:
def read_csv(file_name):
for row in open(file_name, "r"):
yield row
By introducing the keyword yield, we’ve essentially turned the function into a generator function. This new version of our code opens a file, loops through each line, and yields each row.
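A brief usage sketch of the generator version (the file name is just a placeholder):

# Count the rows one at a time; only a single row is ever held in memory.
row_count = sum(1 for row in read_csv("standard_csv_file.csv"))
print("Row count is {}".format(row_count))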
The Python Yield Statement
When the code you’ve written reaches the yield statement, the program will suspend execution there and return the corresponding value to you. Now when a function is suspended in this case, the state of the function is saved somewhere magical. Everything linked to the state of that function is saved, including any variable bindings local to the generator, the instruction pointer, the internal stack, and any exception handling.
Let’s make a literal example. If the yield statement pauses code and suspends execution, then calling the function again should continue where it left off, so let’s make a function with multiple yield statements:
>>> def double_yield():
...     yield "This will print string number one"
...     yield "This will print string number two"
>>> double_obj = double_yield()
>>> print(next(double_obj))
This will print string number one
>>> print(next(double_obj))
This will print string number two
>>> print(next(double_obj))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
StopIteration
Given the advent of Big Data, large data sets are incredibly prevalent these days so memory-efficient coding is a must for Data Scientists and Machine Learning practitioners alike.
The article above highlights key benefits of using generators, and the example shows a case where a generator is clearly preferable to not using one, as it greatly improves memory handling when faced with large data sets.
Generators have become an integral part of my coding and as a practitioner myself, I encourage you to try them out!
Thanks again! Please send me a message if you have any questions! =]
|
wiuppy
wiuppy is a Python3 wrapper for the Where's It Up API (version 4).
Requirements
Python 3.2
requests
Installation
Once you have cloned the repository, you can install the module with pip (or pip3 on Ubuntu):
$ git clone https://github.com/WonderNetwork/wiuppy.git
$ cd wiuppy
$ sudo pip3 install .
Usage
See the official Where's It Up documentation for full API details.
Raw API access
import wiuppy
# get the servers available
api = wiuppy.WIU(wiu_client_id, wiu_client_token)
print(api.locations())
# submit a new job requesting pings from Denver to www.google.com
api = wiuppy.WIU(wiu_client_id, wiu_client_token)
job_id = api.submit('http://www.google.com', [ 'ping' ], [ 'Denver' ])
# get the API response as a python dictionary
results = api.retrieve(job_id) # tasks will be 'in progress' until they complete
Access through the Job interface
import wiuppy
# submit a new job and get the results
api = wiuppy.WIU(wiu_client_id, wiu_client_token)
job = wiuppy.Job(api)
job.uri = 'http://www.google.com'
job.tests = [ 'ping', 'dig', 'trace' ]
job.locations = [ 'Denver', 'Lima', 'Sydney' ]
job_id = job.submit().id # fluent interface
job.retrieve(poll=True) # query the API until all the tasks are done
job.results # job results as a python dict
print(job) # job result details as a formatted JSON string
# get the results from a previously submitted job
wiuppy.Job(api, job_id).retrieve()
Command-line client
For convenience, a command-line client is bundled with this project.
usage: wiuppy.py [-h] [-C CLIENT] [-T TOKEN] [-u URI] [-t TESTS]
[-l LOCATIONS] [-j JOB] [-p] [-f]
Make a request against the WIU API
optional arguments:
-h, --help show this help message and exit
-C CLIENT, --client CLIENT
Where's It Up client ID (required)
-T TOKEN, --token TOKEN
Where's It Up client token (required)
-u URI, --uri URI uri to query
-t TESTS, --tests TESTS
comma-separated tests to run
-l LOCATIONS, --locations LOCATIONS
comma-separated server locations to run from
-j JOB, --job JOB job ID for an existing request to retrieve
-p, --poll query the API until the job is complete
-f, --findtowel
Run without arguments to get a list of available servers, with -j to get the results from an existing job, or with -u/-t/-l to submit a new job.
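For example, a hypothetical invocation with placeholder credentials, assuming the bundled wiuppy.py script is run directly as in the usage text above:
$ python3 wiuppy.py -C abcdef -T 123456 -u http://www.google.com -t ping,trace -l Denver,Sydney -p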
If you'd rather not drop your WIU client and token in the command line every time you make a request, you can use either environment variables:
export WIUPPY_CLIENT=abcdef
export WIUPPY_TOKEN=123456
or a config file at ~/.wiuppy (%USERPROFILE%\.wiuppy on Windows):
[Auth]
client=abcdef
token=123456
|
Artificial Intelligence receives wide-ranging attention and has many uses in the healthcare industry. As an eager learner and a Kaggle beginner, I chose to work on the Malaria Cells dataset to get a little hands-on experience and discover how to work with CNNs, Keras, and images on the Kaggle platform.
One of the many things I love about Kaggle is the extensive knowledge that exists there in the form of Kernels and Discussions. Taking ideas and references from different kernels and specialists really helped me get better at producing highly accurate results. Take a look at other kernels and study their strategies to gain more insights for your own improvement and knowledge building.
You can download the data from Kaggle.
Importing dataset and libraries
I started by importing pandas, numpy, and matplotlib. I chose to run Keras with the TensorFlow backend to build the neural network model, and then imported a number of layers from keras.layers, including Convolution2D, MaxPooling2D, Flatten, Dense, BatchNormalization, and Dropout. We will be using the Sequential model. To work with the images in the dataset, I imported the os, cv2, and Image libraries.
# Libraries used for data visualization and manipulation
import numpy as np
np.random.seed(1000)
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.model_selection import train_test_split
# Libraries used for creating the CNN model
import keras
from keras.layers import Convolution2D, MaxPooling2D, Flatten, Dense, BatchNormalization, Dropout
from keras.models import Sequential
# Libraries used for working with pictures
import os
import cv2
from PIL import Image
Importing the dataset
On Kaggle, the data files are placed inside the input folder, one level up from where the notebook is located. The images are in the cell_images folder, so I set DATA_DIR to point to that location. To store the features I used the variable dataset, and for the labels the variable label. For this example, I resized every image to 64×64.
DATA_DIR = 'your_directory_of_cell_images/'
SIZE = 64
dataset = []
label = []
The next step is to import our data. The infected cell images are in the Parasitized folder, and the uninfected images are in the Uninfected folder.
For both folders, I iterated over every file with the extension png.
parasitized_images = os.listdir(DATA_DIR + 'Parasitized Cells/')
for i, image_name in enumerate(parasitized_images):
    try:
        if (image_name.split('.')[1] == 'png'):
            image = cv2.imread(DATA_DIR + 'Parasitized Cells/' + image_name)
            image = Image.fromarray(image, 'RGB')
            image = image.resize((SIZE, SIZE))
            dataset.append(np.array(image))
            label.append(0)
    except Exception:
        print("Could not read image {} with name {}".format(i, image_name))
For the infected cell images, I read each image with cv2.imread(), convert the array to an image with Image.fromarray(), and then resize it to 64×64. Lastly, I append it to the dataset variable and add 0 to label for each of these images. I repeated the same process for the uninfected cell images, but set the label to 1 this time; a sketch of that loop is shown below.
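The uninfected loop is not included in the excerpt above; a minimal sketch, assuming the folder naming simply mirrors the parasitized loop, could look like this:
uninfected_images = os.listdir(DATA_DIR + 'Uninfected Cells/')  # folder name is an assumption
for i, image_name in enumerate(uninfected_images):
    try:
        if (image_name.split('.')[1] == 'png'):
            image = cv2.imread(DATA_DIR + 'Uninfected Cells/' + image_name)
            image = Image.fromarray(image, 'RGB')
            image = image.resize((SIZE, SIZE))
            dataset.append(np.array(image))
            label.append(1)  # uninfected cells are labelled 1
    except Exception:
        print("Could not read image {} with name {}".format(i, image_name))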
Visualize the dataset
We will use matplotlib to randomly plot 5 infected and 5 uninfected cell images; a sketch of the plotting code is shown after the captions below.
infected cells pictures
uninfected cell pictures
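The plotting code itself is not shown in the post; a minimal sketch of how the random samples could be drawn with the matplotlib already imported above (indices and figure size are arbitrary choices) is:
# show 5 random parasitized (label 0) and 5 random uninfected (label 1) images
import random
plt.figure(figsize=(12, 5))
parasitized_idx = [i for i, l in enumerate(label) if l == 0]
uninfected_idx = [i for i, l in enumerate(label) if l == 1]
for col, idx in enumerate(random.sample(parasitized_idx, 5)):
    plt.subplot(2, 5, col + 1)
    plt.imshow(dataset[idx])
    plt.axis('off')
for col, idx in enumerate(random.sample(uninfected_idx, 5)):
    plt.subplot(2, 5, col + 6)
    plt.imshow(dataset[idx])
    plt.axis('off')
plt.show()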
Implementing the CNN model
A CNN is one of the most powerful and efficient types of neural network for working with images and performing classification. I used Keras to build the neural network model.
Convolution2D
This layer builds a convolution kernel. I set the following parameters:
filters: The first parameter specifies the number of output filters, i.e. the depth of the layer's output. In our case, for both convolutional layers, I set the value to 32.
kernel_size: It specifies the size of the convolution window that traverses the image. I set it to 3×3.
input_shape: It specifies the size of the input for every image. In this design I am working with colour images of size 64×64, so there are 3 channels and the input_shape is (64, 64, 3). We need to set input_shape only for the first layer.
activation: The activation function used here is relu (Rectified Linear Unit).
MaxPool2D
The MaxPool2D layer downscales its input, and it takes the following parameters:
pool_size: It specifies the size of the pooling window, i.e. how many pixel values are reduced to a single value. We use 2×2, so a feature map of size 62×62 is reduced to 31×31.
data_format: It specifies whether the channel dimension comes first or last in the input. In our case the third value of (64, 64, 3) is the channel, so I set data_format to channels_last.
BatchNormalization
This layer normalizes the output of the previous activation, and I changed only one parameter:
axis: It represents the axis to be normalized. As I used channels_last, I set the value as -1.
Dropout
This layer randomly sets a fraction of its inputs to 0 to prevent overfitting in the CNN model, and we used just the rate parameter:
rate: The fraction of the inputs to drop. I set the rate to 0.2.
Flatten
This layer flattens the whole n-dimensional matrix into a single array. Hence, an array of size 64x64x3 is transformed into an array of size 12,288.
Dense
This defines a densely connected layer, and I have set the following parameters:
activation: This sets the activation function, which is relu for every dense layer except the final output layer. For the final dense layer, I used the sigmoid activation function.
units: This specifies the number of neurons in the layer. I have built three dense layers with neuron counts of (512, 256, 2).
Construction of the CNN Model in our example
I have built a Sequential CNN model (classifier).
classifier = Sequential()
classifier.add(Convolution2D(32, (3, 3), input_shape = (SIZE, SIZE, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2), data_format="channels_last"))
classifier.add(BatchNormalization(axis = -1))
classifier.add(Dropout(0.2))
classifier.add(Convolution2D(32, (3, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2), data_format="channels_last"))
classifier.add(BatchNormalization(axis = -1))
classifier.add(Dropout(0.2))
classifier.add(Flatten())
classifier.add(Dense(activation = 'relu', units=512))
classifier.add(BatchNormalization(axis = -1))
classifier.add(Dropout(0.2))
classifier.add(Dense(activation = 'relu', units=256))
classifier.add(BatchNormalization(axis = -1))
classifier.add(Dropout(0.2))
classifier.add(Dense(activation = 'sigmoid', units=2))
classifier.compile(optimizer = 'adam', loss = 'categorical_crossentropy', metrics = ['accuracy'])
I have built a convolutional layer followed by a MaxPooling layer. As you can see, it is followed by BatchNormalization to normalize the previous layer's output, and then by Dropout regularization. A second set of these layers is appended after that. We then Flatten the outputs and pass them to a fully connected network of three dense layers with (512, 256, 2) nodes. The last layer, with the sigmoid activation function, is the output layer.
The final step is to compile the CNN model. We use the Adam optimizer, and since this is a categorical problem, I used categorical_crossentropy as the loss function and accuracy as the evaluation metric.
Train the CNN model and accuracy
I split the dataset into two parts: 20% as the testing set and 80% as the training data; a sketch of the split is shown below.
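The split itself is not shown in the excerpt; a minimal sketch using the train_test_split already imported above (the one-hot conversion matches the 2-unit output layer, and random_state is an arbitrary choice) is:
from keras.utils import to_categorical
X = np.array(dataset)
y = to_categorical(np.array(label), num_classes=2)  # one-hot labels for categorical_crossentropy
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)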
history = classifier.fit(np.array(X_train),
y_train,
batch_size = 64,
verbose = 2,
epochs = 50,
validation_split = 0.1,
shuffle = False)
print("Test_Accuracy: {:.2f}%".format(classifier.evaluate(np.array(X_test), np.array(y_test))[1]*100))
Using the fit function, we trained our convolutional neural network on X_train and y_train. We set the number of epochs to 50, which is essentially 50 full passes over the dataset, with a batch size of 64.
The neural network reached an accuracy of 95.75%.
Data augmentation and accuracy improvement
Data augmentation grows the dataset and trains the neural network on more, and more varied, data. The more data the CNN has to learn from, the better the model performs. The Keras framework provides the ImageDataGenerator class, which can generate this augmented dataset.
Augmentation of Data
from keras.preprocessing.image import ImageDataGenerator
train_generator = ImageDataGenerator(rescale = 1/255,
zoom_range = 0.3,
horizontal_flip = True,
rotation_range = 30)
test_generator = ImageDataGenerator(rescale = 1/255)
train_generator = train_generator.flow(np.array(X_train),
y_train,
batch_size = 64,
shuffle = False)
test_generator = test_generator.flow(np.array(X_test),
y_test,
batch_size = 64,
shuffle = False)
As shown above, for the training data I rescaled the images by dividing by 255, zoomed them with a range of 0.3, flipped them horizontally, and rotated them by up to 30 degrees. For the testing data, I only rescaled the images. The train_generator and test_generator are built with a batch size of 64.
Calculating new accuracy
history = classifier.fit_generator(train_generator,
steps_per_epoch = len(X_train)/64,
epochs = 50,
shuffle = False)
print("Test_Accuracy(after augmentation): {:.2f}%".format(classifier.evaluate_generator(test_generator, steps = len(X_test), verbose = 1)[1]*100))
After that, we trained the classifier (the model) using the fit_generator method and measured the new accuracy.
The neural network reached an accuracy of 96.41% with data augmentation.
As you can see, using data augmentation we were able to improve the model accuracy while still starting from the same data we had at the beginning.
Conclusion
In this blog post, we reviewed the use of CNN models and data augmentation on Malaria cell images and reached a test accuracy of 96.41%.
Note: This is a guest post, and the opinion in this article is of the guest writer. If you have any issues with any of the articles posted at www.marktechpost.com please contact us at [email protected]
|
I am having trouble with Pandas.read_csv
I would like to read this text file (see below). When I take this data and copy it into Excel > Text to Columns > delimited by "Space", it gives me the exact output I am looking for.
I have tried a bunch of different ways; I thought that a regex to account for multiple spaces would do the trick, but I failed to make it work.
I try this code:
petrelTxt = pd.read_csv(petrelfile, sep = ' ', header = None)
and it gives me the error
CParserError: Error tokenizing data. C error: Expected 6 fields in line 2, saw 17
When I try changing it to sep='\s+', it makes it farther down the file, but still does not work.
petrelTxt = pd.read_csv(petrelfile, sep = '\s+', header = None)
CParserError: Error tokenizing data. C error: Expected 5 fields in line 3, saw 6
This is the original txt file:
# WELL TRACE FROM PETREL
# WELL NAME: ZZ-0113
# WELL HEAD X-COORDINATE: 9999999.00000000 (m)
# WELL HEAD Y-COORDINATE: 9999999.00000000 (m)
# WELL KB: 159.00000000 (ft)
# WELL TYPE: OIL
# MD AND TVD ARE REFERENCED (=0) AT KB AND INCREASE DOWNWARDS
# ANGLES ARE GIVEN IN DEGREES
# XYZ TRACE IS GIVEN IN COORDINATE SYSTEM WGS_1924_UTM_Zone_42N
# AZIMUTH REFERENCE TRUE NORTH
# DX DY ARE GIVEN IN GRID NORTH IN m-UNITS
# DEPTH (Z, TVD) GIVEN IN ft-UNITS
#======================================================================================================================================
MD X Y Z TVD DX DY AZIM INCL DLS
#======================================================================================================================================
0.0000000000 999999.00000 9999999.0000 159.00000000 0.0000000000 0.0000005192 -0.000000000 1.3487006929 0.0000000000 0.0000000000
132.00000000 999999.08032 9999999.9116 27.000774702 131.99922530 0.0803153923 -0.088388779 139.08870069 0.3400000000 0.2575757504
221.00000000 999999.19115 9999999.8017 -61.99775149 220.99775149 0.1911487882 -0.198290891 132.93870069 0.3200000000 0.0456726104
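One plausible fix, since every header line starts with '#', is to tell pandas to treat those lines as comments and to split on runs of whitespace (a sketch, not tested against the full file):
import pandas as pd
# '#' lines are dropped as comments; the remaining first line (MD X Y ...) becomes the header
petrelTxt = pd.read_csv(petrelfile, sep=r'\s+', comment='#', header=0)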
|
Represents a potentially large set of elements.
tf.data.Dataset( variant_tensor)
The tf.data.Dataset API supports writing descriptive and efficient input pipelines. Dataset usage follows a common pattern:
Create a source dataset from your input data.
Apply dataset transformations to preprocess the data.
Iterate over the dataset and process the elements.
Iteration happens in a streaming fashion, so the full dataset does not need to fit into memory.
Source Datasets:
The simplest way to create a dataset is to create it from a python list:
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset:
print(element)
tf.Tensor(1, shape=(), dtype=int32)
tf.Tensor(2, shape=(), dtype=int32)
tf.Tensor(3, shape=(), dtype=int32)
To process lines from files, use tf.data.TextLineDataset:
dataset = tf.data.TextLineDataset(["file1.txt", "file2.txt"])
To process records written in the TFRecord format, use TFRecordDataset:
dataset = tf.data.TFRecordDataset(["file1.tfrecords", "file2.tfrecords"])
To create a dataset of all files matching a pattern, use tf.data.Dataset.list_files:
dataset = tf.data.Dataset.list_files("/path/*.txt") # doctest: +SKIP
Transformations:
Once you have a dataset, you can apply transformations to prepare the data for your model:
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.map(lambda x: x*2)
list(dataset.as_numpy_iterator())
[2, 4, 6]
Common Terms:
Element: A single output from calling next() on a dataset iterator. Elements may be nested structures containing multiple components. For example, the element (1, (3, "apple")) has one tuple nested in another tuple. The components are 1, 3, and "apple".
Component: The leaf in the nested structure of an element.
Supported types:
Elements can be nested structures of tuples, named tuples, and dictionaries. Note that Python lists are not treated as nested structures of components. Instead, lists are converted to tensors and treated as components. For example, the element (1, [1, 2, 3]) has only two components; the tensor 1 and the tensor [1, 2, 3]. Element components can be of any type representable by tf.TypeSpec, including tf.Tensor, tf.data.Dataset, tf.sparse.SparseTensor, tf.RaggedTensor, and tf.TensorArray.
a = 1 # Integer element
b = 2.0 # Float element
c = (1, 2) # Tuple element with 2 components
d = {"a": (2, 2), "b": 3} # Dict element with 3 components
Point = collections.namedtuple("Point", ["x", "y"]) # doctest: +SKIP
e = Point(1, 2) # Named tuple # doctest: +SKIP
f = tf.data.Dataset.range(10) # Dataset element
Args
variant_tensor A DT_VARIANT tensor that represents the dataset.
Attributes
element_spec The type specification of an element of this dataset.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset.element_spec
TensorSpec(shape=(), dtype=tf.int32, name=None)
Methods
apply
apply( transformation_func)
Applies a transformation function to this dataset.
apply enables chaining of custom Dataset transformations, which are represented as functions that take one Dataset argument and return a transformed Dataset.
dataset = tf.data.Dataset.range(100)
def dataset_fn(ds):
return ds.filter(lambda x: x < 5)
dataset = dataset.apply(dataset_fn)
list(dataset.as_numpy_iterator())
[0, 1, 2, 3, 4]
Args
transformation_func A function that takes one Dataset argument and returns a Dataset.
Returns
Dataset The Dataset returned by applying transformation_func to this dataset.
as_numpy_iterator
as_numpy_iterator()
Returns an iterator which converts all elements of the dataset to numpy.
Use as_numpy_iterator to inspect the content of your dataset. To see element shapes and types, print dataset elements directly instead of using as_numpy_iterator.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset:
print(element)
tf.Tensor(1, shape=(), dtype=int32)
tf.Tensor(2, shape=(), dtype=int32)
tf.Tensor(3, shape=(), dtype=int32)
This method requires that you are running in eager mode and the dataset's element_spec contains only TensorSpec components.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset.as_numpy_iterator():
print(element)
1
2
3
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
print(list(dataset.as_numpy_iterator()))
[1, 2, 3]
as_numpy_iterator() will preserve the nested structure of dataset elements.
dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]),
'b': [5, 6]})
list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5},
{'a': (2, 4), 'b': 6}]
True
Returns
An iterable over the elements of the dataset, with their tensors converted to numpy arrays.
Raises
TypeError if an element contains a non-Tensor value.
RuntimeError if eager execution is not enabled.
batch
batch(
batch_size, drop_remainder=False
)
Combines consecutive elements of this dataset into batches.
dataset = tf.data.Dataset.range(8)
dataset = dataset.batch(3)
list(dataset.as_numpy_iterator())
[array([0, 1, 2]), array([3, 4, 5]), array([6, 7])]
dataset = tf.data.Dataset.range(8)
dataset = dataset.batch(3, drop_remainder=True)
list(dataset.as_numpy_iterator())
[array([0, 1, 2]), array([3, 4, 5])]
The components of the resulting element will have an additional outer dimension, which will be batch_size (or N % batch_size for the last element if batch_size does not divide the number of input elements N evenly and drop_remainder is False). If your program depends on the batches having the same outer dimension, you should set the drop_remainder argument to True to prevent the smaller batch from being produced.
Args
batch_size A tf.int64 scalar tf.Tensor, representing the number of consecutive elements of this dataset to combine in a single batch.
drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last batch should be dropped in the case it has fewer than batch_size elements; the default behavior is not to drop the smaller batch.
Returns
Dataset A Dataset.
cache
cache( filename='')
Caches the elements in this dataset.
The first time the dataset is iterated over, its elements will be cached either in the specified file or in memory. Subsequent iterations will use the cached data.
dataset = tf.data.Dataset.range(5)
dataset = dataset.map(lambda x: x**2)
dataset = dataset.cache()
# The first time reading through the data will generate the data using
# `range` and `map`.
list(dataset.as_numpy_iterator())
[0, 1, 4, 9, 16]
# Subsequent iterations read from the cache.
list(dataset.as_numpy_iterator())
[0, 1, 4, 9, 16]
When caching to a file, the cached data will persist across runs. Even the first iteration through the data will read from the cache file. Changing the input pipeline before the call to .cache() will have no effect until the cache file is removed or the filename is changed.
dataset = tf.data.Dataset.range(5)
|
import matplotlib.pyplot as plt
x = range(101)
y = x
line, = plt.plot(x, y)
plt.ion()
for i in range(1, 100):
    y2 = [y[i2]/i for i2 in range(len(y))]
    line.set_ydata(y2)
    plt.draw()
    plt.pause(0.05)
With Matplotlib, I made an animated plot.
Then, to update the plotted data on each step, I used
line.set_ydata(y2)
plt.draw()
line is the instance I kept so the plot could be called again and reused later.
Without
plt.pause(0.05)
it does not animate, for some reason. The reason is unclear, but since adding it makes it work, I add it.
I want to study difference equations. I am trying to learn them from scratch using code published on a certain site, but it does not run, perhaps because the Python or library versions are different. Still, sites that publish the full code are rare, so I will somehow get this one working.
Simply adding
plot.pause(0.1)
at the end of the for loop made it animate. (It says plot there, but by convention it should be plt.pause(0.1).)
The heat conduction now animates smoothly.
I am dropping atom and trying Markdown Here. Gmail can use Markdown, and the blog's editing screen can be switched to Markdown too. It is excellent.
One drawback: the code parts (the sections wrapped in ```hogehoge```) were line-broken in other Markdown environments, but here they were not. Maybe that is how Markdown Here behaves, or maybe it is simply that the line endings are not Windows line endings.
If you want line breaks inside the syntax-highlighted part, you have to put two trailing spaces after each line, just like a normal Markdown line break, or it will not break.
Even after trying that, the lines break at first but revert as soon as the "Save" button is pressed. Apparently line breaks have to be handled by hand.
If the code were long, I might have to use Pygments.
|
UNSOLVED Metrics Machine freezing on Groups update
I am working on updating / building groups in Metrics Machine, and I have hit the same blocker 3 times so far today. I'm wondering if anyone may have any suggestions on how I might avoid it in the future, and I partly hope it might be something that could be further improved in future versions of MM.
The Problem
I have added more glyphs to a font (Name Sans Text Bold, WIP), and I am now updating groups to account for this.
Unfortunately, when I go to 'Apply' those updates, I get a progress bar, 'Searching for conflicts...', and it hangs there.

The Output Window includes the following:
Traceback (most recent call last):
File "/Applications/RoboFont.app/Contents/Resources/lib/python3.7/vanilla/vanillaBase.py", line 503, in action_
self.callback(sender)
File "/Users/stephennixon/Library/Application Support/RoboFont/plugins/MetricsMachine.roboFontExt/lib/mm4/interface/groupEditSheet.py", line 188, in applyCallback
needResolution = groups.metricsMachine.applyGroups()
File "/Users/stephennixon/Library/Application Support/RoboFont/plugins/MetricsMachine.roboFontExt/lib/mm4/objects/mmGroups.py", line 1509, in applyGroups
pairs, haveConflict = self._searchForConflict(superPairData, superPair, finalValue)
File "/Users/stephennixon/Library/Application Support/RoboFont/plugins/MetricsMachine.roboFontExt/lib/mm4/objects/mmGroups.py", line 1633, in _searchForConflict
haveConflict = self._tryToCompressPair(superPairData, superPair)
File "/Users/stephennixon/Library/Application Support/RoboFont/plugins/MetricsMachine.roboFontExt/lib/mm4/objects/mmGroups.py", line 1708, in _tryToCompressPair
left = self[superSide1]
File "/Users/stephennixon/Library/Application Support/RoboFont/plugins/MetricsMachine.roboFontExt/lib/mm4/objects/mmGroups.py", line 71, in __getitem__
return self.super()[groupName]
KeyError: 'public.kern1.x'
Traceback (most recent call last):
File "lib/doodleDelegate.pyc", line 96, in sendEvent_
File "/Applications/RoboFont.app/Contents/Resources/lib/python3.7/vanilla/vanillaBase.py", line 503, in action_
self.callback(sender)
File "/Users/stephennixon/Library/Application Support/RoboFont/plugins/MetricsMachine.roboFontExt/lib/mm4/interface/groupEditSheet.py", line 188, in applyCallback
needResolution = groups.metricsMachine.applyGroups()
File "/Users/stephennixon/Library/Application Support/RoboFont/plugins/MetricsMachine.roboFontExt/lib/mm4/objects/mmGroups.py", line 1509, in applyGroups
pairs, haveConflict = self._searchForConflict(superPairData, superPair, finalValue)
File "/Users/stephennixon/Library/Application Support/RoboFont/plugins/MetricsMachine.roboFontExt/lib/mm4/objects/mmGroups.py", line 1633, in _searchForConflict
haveConflict = self._tryToCompressPair(superPairData, superPair)
File "/Users/stephennixon/Library/Application Support/RoboFont/plugins/MetricsMachine.roboFontExt/lib/mm4/objects/mmGroups.py", line 1708, in _tryToCompressPair
left = self[superSide1]
File "/Users/stephennixon/Library/Application Support/RoboFont/plugins/MetricsMachine.roboFontExt/lib/mm4/objects/mmGroups.py", line 71, in __getitem__
return self.super()[groupName]
KeyError: 'public.kern1.x'
Previous times, I have had
KeyError: 'public.kern2.quote' and KeyError: 'public.kern1.quote'.
Sure enough, I did have those groups referenced in my kerning.plist, but not in the groups.plist. Admittedly, this is due to some scripting done outside of MM, so this may be where Tal's hands are clean of the issue.
For example, the first errors were from an earlier generic quote group that was later split into quoteleft, quoteright, and quotesingle, but as it turns out, even though the groups were updated, the old kerning data hung around to bite me now.
Similarly, I had changed kerning from a k group to an x group, but then copied things around between UFOs.
I am actually unsure of how MM might help solve this kind of issue, as it comes from factors outside of MM's control. But, I suppose, I would much rather MM ran checks before I edited groups, rather than after? It is definitely a bit difficult to spend time editing groups, only to have the app freeze and require a force quit.
Thanks for any potential suggestions if you might know of a good way to efficiently avoid future such clashes (or detect them before editing groups)! At the very least, by posting this, I will hopefully help others avoid or debug the issue for themselves.
I should add, one simple strategy I have found to advance despite this issue is to simply apply my group changes much more frequently and save the changes. That works most of the time, and at the worst, I only lose a little bit of grouping work.
Okay, actually, I think the solution to my problem of finding mismatches is quite simple. Here is the basis of a script I will use to detect this:
"""
Simple script to find problems in UFO kerning, for
https://forum.robofont.com/topic/885/metrics-machine-freezing-on-groups-update/2
USAGE
Run from the command line:
python3 PATH/TO/SCRIPT/check-kerning-vs-groups.py PATH/TO/UFO/family-style.ufo
"""
from fontParts.world import *
import sys
ufoPath = sys.argv[1]
font = OpenFont(ufoPath, showInterface=False)
groups = [group for group in font.groups.keys()]
kerningKeyGroups = set([kern[0] for kern in font.kerning.keys() if "public.kern" in kern[0]])
mismatches = [group for group in kerningKeyGroups if group not in groups]
print(mismatches)
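Since the earlier tracebacks also mention a missing public.kern2 group, the same check can be extended to the side-2 (right) groups referenced in the kerning; a small sketch building on the script above:
# also check side-2 groups referenced in kerning but missing from groups.plist
kerningKeyGroups2 = set([kern[1] for kern in font.kerning.keys() if "public.kern" in kern[1]])
mismatches2 = [group for group in kerningKeyGroups2 if group not in groups]
print(mismatches2)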
ehum, if the data is incorrect it's hard to find a solution for this issue.
MM is built so that this doesn't happen :)
I don't know what should happen in this case:
MM should not create empty kern groups
MM should not return an empty list as a fallback for non-existing groups
MM should not remove the pair with the missing group.
Maybe MM should report, so it gets fixed first. (but there are many edge cases to report...)
(Maybe this is a case for ufonormalizer?)
MM should
|
The story of how I wanted to identify constellations in the night sky using machine learning
Hello, nice to meet you. I'm Sasagawa, an Aidemy trainee. The season when the air gets clear and the stars are easy to see is coming. In winter, many people can probably point out Orion or Gemini, but wouldn't it be cool to be able to find the more difficult constellations too, and wouldn't stargazing become more fun? However, picking out constellations without any guide is not easy. So this time I will train on constellation images and see whether the constellations in the night sky can be identified by machine.
Execution environment
macOS Mojave 10.14, Python 3.6.3, jupyter notebook
Data
About the constellation types
The constellations I attempt to classify are the following five winter constellations. In the actual night sky they look like this.
Taurus (the Bull), Orion, Auriga (the Charioteer), Canis_Major (the Great Dog), and Gemini (the Twins). Each has a characteristic star: Aldebaran, Betelgeuse, Capella, Sirius, and Pollux respectively, so with the human eye it is not particularly difficult to work out roughly where each one is by using those stars as markers. Seeing stars at all in central Tokyo is fairly difficult, though...
Image collection
There were not many night-sky photos in which the constellations appear clearly. Although I was concerned this might drift from the point of the article, I decided that learning the constellation shapes was enough, so I actively adopted images that were close to the real sky even if they were not actual night-sky photos.
That said, I could not find a website that systematically hosts a huge number of such images, so scraping was of little use and the collection was almost entirely manual.
# import libraries
import numpy as np
import cv2
import urllib.request
import os
from PIL import Image
import glob
# create folders
winter_constellations = ["Taurus", "Orion", "Auriga", "Canis_Minor", "Gemini"]
for const in winter_constellations:
    if not os.path.exists("./train/" + const):
        os.mkdir("./train/" + const)
    if not os.path.exists("./test/" + const):
        os.mkdir("./test/" + const)
As an example of image collection, here is the code I used to download images from the site "Seiza Nyumon" (Introduction to Constellations).
for const in winter_constellations:
    url = "http://mirahouse.jp/begin/constellation/" + const + "03.gif"
    file = const + "03.gif"
    path = "./train/" + const + "/"
    if not os.path.exists(path + file):
        # run the download
        urllib.request.urlretrieve(url, path + file)
    if os.path.exists(path + file):
        # convert to jpg
        img = Image.open(path + file)
        # save as "0001.jpg"
        img.save(path + "0001.jpg")
        # delete the downloaded gif file
        os.remove(path + file)
Image augmentation
I collected roughly 30 images per constellation and then applied rotation and thresholding as follows.
# rotation
angles = [60, 120, 180, 240, 300]
for const in winter_constellations:
    path = "./train/" + const + "/"
    files = glob.glob(path + "/*.jpg")
    for i, file in enumerate(files):
        img = Image.open(file)
        for angle in angles:
            tmp = img.rotate(angle)
            tmp.save(path + str(i) + "_" + str(angle) + ".jpg")
# thresholding
levels = [50, 100, 150, 200, 250]
for const in winter_constellations:
    path = "./train/" + const + "/"
    files = glob.glob(path + "/*.jpg")
    for i, file in enumerate(files):
        img = cv2.imread(file)
        for level in levels:
            thr = lambda x: cv2.threshold(x, level, 255, cv2.THRESH_BINARY)[1]
            img = thr(img)
            cv2.imwrite(path + str(i) + "_" + str(level) + ".jpg", img)
The roughly 400 images per class created this way were split 8:2 into training and evaluation data (a sketch of one way to do the move is shown after the renaming command below). I ran the following command in each folder to give the image files sequential numbers.
ls *.jpg | awk '{ printf "mv %s %04d.jpg\n", $0, NR }' | sh
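The move of 20% of the files into the test folders is not shown; one possible sketch, building on the winter_constellations list and the train/test folders created above, is:
# move the last 20% of each constellation's images from train/ to test/
import glob
import os
import shutil

for const in winter_constellations:
    files = sorted(glob.glob("./train/" + const + "/*.jpg"))
    n_test = len(files) // 5  # 20% of the files
    for file in files[-n_test:]:
        shutil.move(file, "./test/" + const + "/" + os.path.basename(file))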
Classification
Classification with a CNN
First, I try classification using a CNN. I borrowed the mnist_cnn model from the examples published by Keras.
# import libraries
import keras
from keras.utils import np_utils
from keras.layers.convolutional import Conv2D, MaxPooling2D
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten
import numpy as np
import glob
import cv2
from PIL import Image
# basic settings
winter_constellations = ["Taurus", "Orion", "Auriga", "Canis_Minor", "Gemini"]
image_size = 224
# prepare the training data
train_X = []
train_Y = []
for index, const in enumerate(winter_constellations):
    path = "./" + const
    files = glob.glob(path + "/*.jpg")
    for file in files:
        image = Image.open(file)
        image = image.convert("RGB")
        image = image.resize((image_size, image_size))
        data = np.asarray(image)
        train_X.append(data)
        train_Y.append(index)
train_X = np.asarray(train_X)
train_Y = np.asarray(train_Y)
# normalization
train_X = train_X.astype("float32")
train_X = train_X/255.0
# convert to one-hot representation
train_Y = np_utils.to_categorical(train_Y, 5)
# define the model
batch_size = 128
num_classes = 5
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
activation='relu',
input_shape=train_X.shape[1:]))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adadelta(),
metrics=['accuracy'])
# run training
history = model.fit(train_X, train_Y,
batch_size=batch_size,
epochs=50,
verbose=1,
validation_data=(test_X, test_Y))
# training progress
Train on 1627 samples, validate on 407 samples
Epoch 1/50
1627/1627 [==============================] - 13s 7ms/step - loss: 1.9747 - acc: 0.3183 - val_loss: 2.3256 - val_acc: 0.1111
Epoch 2/50
1627/1627 [==============================] - 12s 6ms/step - loss: 1.1243 - acc: 0.7020 - val_loss: 2.6229 - val_acc: 0.1778
Epoch 3/50
1627/1627 [==============================] - 12s 6ms/step - loss: 0.5156 - acc: 0.8563 - val_loss: 2.6733 - val_acc: 0.2667
…
Epoch 47/50
1627/1627 [==============================] - 13s 7ms/step - loss: 0.0502 - acc: 0.9862 - val_loss: 10.5200 - val_acc: 0.1556
Epoch 48/50
1627/1627 [==============================] - 12s 7ms/step - loss: 0.0469 - acc: 0.9878 - val_loss: 10.5937 - val_acc: 0.2222
Epoch 49/50
1627/1627 [==============================] - 12s 7ms/step - loss: 0.0683 - acc: 0.9851 - val_loss: 10.6505 - val_acc: 0.1556
Epoch 50/50
1627/1627 [==============================] - 12s 7ms/step - loss: 0.0503 - acc: 0.9867 - val_loss: 10.2061 - val_acc: 0.1778
The validation accuracy did not improve at all, so for now I decided to move on to a different method.
Using transfer learning
I decided to try transfer learning, referring to the following websites: "How to learn image classification from few images (transfer learning / fine tuning with Keras)" | SPJ, and "Transfer learning with Keras / Tensorflow" - Qiita. I use the pretrained model VGG16. I was strongly drawn by the phrase "effective when there is little training data", though it was not clear whether this would be suitable for constellation recognition.
# import libraries
import keras
from keras.applications.vgg16 import VGG16
from keras.preprocessing.image import ImageDataGenerator
from keras.layers import Dense, GlobalAveragePooling2D, Input
import keras.callbacks
from keras.layers.convolutional import Conv2D, MaxPooling2D
from keras.models import Model, Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten
# basic settings
N_CATEGORIES = 5
IMAGE_SIZE = 224
BATCH_SIZE = 32
NUM_TRAINING = 2000
NUM_VALIDATION = 400
# define the model
input_tensor = Input(shape=(IMAGE_SIZE, IMAGE_SIZE, 3))
base_model = VGG16(weights='imagenet', include_top=False,input_tensor=input_tensor)
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
predictions = Dense(N_CATEGORIES, activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=predictions)
for layer in base_model.layers:
    layer.trainable = False
from keras.optimizers import SGD
model.compile(optimizer=SGD(lr=0.0001, momentum=0.9), loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()
# create the training and validation data
# ImageDataGenerator automatically applies rotations and zooms to augment the number of images
image_data_generator = ImageDataGenerator(rescale=1.0/255)
train_data = image_data_generator.flow_from_directory(
'./train',
target_size=(224, 224),
batch_size=32,
class_mode='categorical',
shuffle=True
)
image_data_generator = ImageDataGenerator(rescale=1.0/255)
validation_data = image_data_generator.flow_from_directory(
'./test',
target_size=(224, 224),
batch_size=32,
class_mode='categorical',
shuffle=True
)
# run training
histry = model.fit_generator(train_data,
steps_per_epoch = NUM_TRAINING//BATCH_SIZE,
epochs=50,
verbose=1,
validation_data = validation_data,
validation_steps = NUM_VALIDATION//BATCH_SIZE,
)
Epoch 1/50
56/56 [==============================] - 1298s 23s/step - loss: 2.2271 - acc: 0.1439 - val_loss: 2.2002 - val_acc: 0.0185
Epoch 2/50
56/56 [==============================] - 1145s 20s/step - loss: 2.1892 - acc: 0.1686 - val_loss: 2.1966 - val_acc: 0.1111
Epoch 3/50
56/56 [==============================] - 1175s 21s/step - loss: 2.1831 - acc: 0.1578 - val_loss: 2.1955 - val_acc: 0.0370
Epoch 4/50
56/56 [==============================] - 1145s 20s/step - loss: 2.1809 - acc: 0.1837 - val_loss: 2.1951 - val_acc: 0.1296
Epoch 5/50
56/56 [==============================] - 1130s 20s/step - loss: 2.1751 - acc: 0.1755 - val_loss: 2.1957 - val_acc: 0.1852
Epoch 6/50
56/56 [==============================] - 1156s 21s/step - loss: 2.1742 - acc: 0.1836 - val_loss: 2.1959 - val_acc: 0.1111
Epoch 7/50
56/56 [==============================] - 1159s 21s/step - loss: 2.1684 - acc: 0.2077 - val_loss: 2.1943 - val_acc: 0.0556
Having gotten this far (a little under 2 hours), I felt the accuracy was not improving enough for the time it was taking, so I stopped.
Object detection
I began to think that this is not really an image classification problem but rather an image segmentation / region extraction problem, or an object detection problem. I considered the following two approaches.
1. Region extraction with U-Net (a U-shaped neural network)
Reference page:
2. Object detection using an algorithm provided by AWS SageMaker
I chose the latter, which seemed easier to get started with.
Object detection with SageMaker
The base neural network is ResNet.
I referred to the website above and to the Amazon SageMaker example "Object Detection using the Image and JSON format". The image data used here was augmented with the processing described earlier.
Annotation
An additional step this method requires for training is "annotation". You manually create the information about what is where in each image and output it as a JSON file. For this I used VoTT (Visual Object Tagging Tool).
You draw a box around each constellation and record which constellation it is. The figure above has Orion and Taurus boxed. Repeating this work produces a single JSON file for all the images in one folder. The following code turns it into one file per image. This code requires the images to be numbered sequentially for training, so be sure to renumber them before annotating. (I learned this the hard way, once.)
import json
file_name = './annotation.json'
class_list = {'Taurus':0, 'Orion':1, 'Gemini':2}
# here Auriga and Canis Major were dropped, so only 3 constellations are used
with open(file_name) as f:
js = json.load(f)
for k, v in js['frames'].items():
k = int(k)
line = {}
line['file'] = '{0:04d}'.format(k+1) + '.jpg'
line['image_size'] = [{
'width':int(v[0]['width']),
'height':int(v[0]['height']),
'depth':3
}]
line['annotations'] = []
for annotation in v:
line['annotations'].append(
{
'class_id':class_list[annotation['tags'][0]],
'top':int(annotation['y1']),
'left':int(annotation['x1']),
'width':int(annotation['x2'])- int(annotation['x1']),
'height':int(annotation['y2']-int(annotation['y1']))
}
)
line['categories'] = []
for name, class_id in class_list.items():
line['categories'].append(
{
'class_id':class_id,
'name':name
}
)
f = open('./json/'+'{0:04d}'.format(k+1) + '.json', 'w')
json.dump(line, f)
Running this creates as many JSON files as there are images.
Uploading the data
Next, so that SageMaker can access the image data and JSON data we created, we upload them to Amazon S3 (a sketch of the upload is shown below). When creating the S3 bucket, make sure the bucket name contains "sagemaker". Then create a SageMaker notebook instance.
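The upload step itself is not shown in the post; one possible sketch in Python (the bucket and local folder names are placeholders, and the channel layout simply mirrors the channels configured later) is:
# upload the image and annotation folders to S3 so SageMaker can read them
import os
import boto3

s3 = boto3.client('s3')
for channel, local_dir in [('train', './train'), ('validation', './test'),
                           ('train_annotation', './train_json'), ('validation_annotation', './test_json')]:
    for root, _, files in os.walk(local_dir):
        for name in files:
            s3.upload_file(os.path.join(root, name), 'your-sagemaker-bucket', channel + '/' + name)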
Building the model and creating an endpoint in SageMaker
After creating the instance, open the Jupyter notebook that will be the execution environment and, from New, create a notebook with the conda_mxnet_p36 kernel.
import sagemaker
from sagemaker import get_execution_role
role = get_execution_role()
print(role)
sess = sagemaker.Session()
bucket = 'bucket-name' # enter the name of the bucket you created
# get the training image URI
from sagemaker.amazon.amazon_estimator import get_image_uri
training_image = get_image_uri(sess.boto_region_name, 'object-detection', repo_version="latest")
print (training_image)
# configure the input data
train_channel = 'train'
validation_channel = 'validation'
train_annotation_channel = 'train_annotation'
validation_annotation_channel = 'validation_annotation'
s3_train_data = 's3://{}/{}'.format(bucket, train_channel)
s3_validation_data = 's3://{}/{}'.format(bucket, validation_channel)
s3_train_annotation = 's3://{}/{}'.format(bucket, train_annotation_channel)
s3_validation_annotation = 's3://{}/{}'.format(bucket, validation_annotation_channel)
s3_output_location = 's3://{}/{}/output'.format(bucket, prefix)
# configure the algorithm
od_model = sagemaker.estimator.Estimator(training_image,
role,
train_instance_count=1,
train_instance_type='ml.t2.medium',
train_volume_size = 50,
train_max_run = 360000,
input_mode = 'File',
output_path=s3_output_location,
sagemaker_session=sess)
# set the hyperparameters
od_model.set_hyperparameters(base_network='resnet-50',
use_pretrained_model=1,
num_classes=3,
mini_batch_size=10,
epochs=50,
learning_rate=0.001,
lr_scheduler_step='10',
lr_scheduler_factor=0.1,
optimizer='sgd',
momentum=0.9,
weight_decay=0.0005,
overlap_threshold=0.5,
nms_threshold=0.45,
image_shape=512,
label_width=350,
num_training_samples=360)
# handshake between the data channels and the algorithm
train_data = sagemaker.session.s3_input(s3_train_data, distribution='FullyReplicated',
content_type='image/jpeg', s3_data_type='S3Prefix')
validation_data = sagemaker.session.s3_input(s3_validation_data, distribution='FullyReplicated',
content_type='image/jpeg', s3_data_type='S3Prefix')
train_annotation = sagemaker.session.s3_input(s3_train_annotation, distribution='FullyReplicated',
content_type='image/jpeg', s3_data_type='S3Prefix')
validation_annotation = sagemaker.session.s3_input(s3_validation_annotation, distribution='FullyReplicated',
content_type='image/jpeg', s3_data_type='S3Prefix')
data_channels = {'train': train_data, 'validation': validation_data,
'train_annotation': train_annotation, 'validation_annotation':validation_annotation}
# run training and create the model
od_model.fit(inputs=data_channels, logs=True)
This took about 20 minutes.
# create the endpoint
object_detector = od_model.deploy(initial_instance_count = 1,
instance_type = 'ml.t2.medium')
Inference
After the above steps were complete, I uploaded a few images to the Jupyter notebook and ran inference.
file_name = 'test1.jpg'
with open(file_name, 'rb') as image:
    f = image.read()
    b = bytearray(f)
    ne = open('n.txt','wb')
    ne.write(b)
import json
object_detector.content_type = 'image/jpeg'
results = object_detector.predict(b)
detections = json.loads(results)
print (detections)
The inference results are output in JSON format. Since a bare list of numbers is hard to interpret, the following code visualizes them.
def visualize_detection(img_file, dets, classes=[], thresh=0.6):
"""
visualize detections in one image
Parameters:
----------
img : numpy.array
image, in bgr format
dets : numpy.array
ssd detections, numpy.array([[id, score, x1, y1, x2, y2]...])
each row is one object
classes : tuple or list of str
class names
thresh : float
score threshold
"""
import random
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
img=mpimg.imread(img_file)
plt.imshow(img)
height = img.shape[0]
width = img.shape[1]
colors = dict()
for det in dets['prediction']:
(klass, score, x0, y0, x1, y1) = det
if score < thresh:
continue
cls_id = int(klass)
if cls_id not in colors:
colors[cls_id] = (random.random(), random.random(), random.random())
xmin = int(x0 * width)
ymin = int(y0 * height)
xmax = int(x1 * width)
ymax = int(y1 * height)
rect = plt.Rectangle((xmin, ymin), xmax - xmin,
ymax - ymin, fill=False,
edgecolor=colors[cls_id],
linewidth=3.5)
plt.gca().add_patch(rect)
class_name = str(cls_id)
if classes and len(classes) > cls_id:
class_name = classes[cls_id]
plt.gca().text(xmin, ymin - 2,
'{:s} {:.3f}'.format(class_name, score),
bbox=dict(facecolor=colors[cls_id], alpha=0.5),
fontsize=12, color='white')
plt.show()
object_categories = ['Taurus', 'Orion', 'Gemini']
# Setting a threshold 0.20 will only plot detection results that have a confidence score greater than 0.20.
# threshold is the cutoff value for the confidence score
threshold = 0.2
# Visualize the detections.
visualize_detection(file_name, detections, object_categories, threshold)
The result was as follows: nothing was detected.
I thought Orion at least might be detected, but nothing was detected regardless of whether the threshold was high or low.
Discussion
I considered why the classification and detection failed so completely.
- There were few images of constellations against a real night sky. First of all, there simply was not much image data. I probably should have given up on the theme at this point, but I was optimistic that image augmentation would make up for it.
- The parameter tuning was inappropriate. This is entirely down to my lack of knowledge. I will re-read the documentation and work on building better models.
- Constellation images may not be well suited to learning. Recognizing a specific sparse arrangement of points as "a thing" among countless white dots on a black background is hard even for the human eye, so it may have been asking too much. On the other hand, classification from black-and-white images is done all the time, so my own shortcomings certainly played a part. Better thresholding might still make recognition possible.
Also, I felt there might be hope if the shapes of the constellations (the arrangement of the stars) could be learned well. I want to keep thinking about how that could be implemented.
Summary
- I tried to identify constellations in the night sky with three machine learning approaches: a CNN, transfer learning with VGG16, and object detection with AWS SageMaker.
- All of the methods failed badly, and the result was that the constellations could not be identified.
- There seem to be many places where the training could be improved, such as collecting more data and tuning the parameters. I hold a "star guide" certification and took on this theme in the hope of spreading the appeal of the stars.
Smartphone apps such as Star Chart display the constellations based on your latitude and longitude plus the direction and tilt of the device. Identifying constellations from the actual night sky purely by machine may simply be difficult.
Come to think of it, hunting for constellations with a good old planisphere is part of the fun. On a night when the air is clear, you too might find something good by looking up at the sky. This turned out to be a rather anticlimactic article, but thank you for reading.
If you want to learn Python or AI programming, check out Aidemy Premium Plan, an online school.
"I'm interested in machine learning and deep learning."
"How can AI actually be used?"
"Can I, with a humanities background, keep up with learning to program?"
If anything here caught your interest, please feel free to come to a free Aidemy Premium Plan consultation session and tell us what is on your mind!
Aidemy Magazine and Twitter (@AidemyMagazine) also introduce many AI use cases. Check them both out!
|
Explicit Reporting - Jupyter Notebook
Trains is now ClearML
This documentation applies to the legacy Trains versions. For the latest documentation, see ClearML.
The Allegro_Trains_logging_example.ipynb example demonstrates integrating Trains explicit reporting into code running in a Jupyter Notebook. All Trains explicit reporting works with Jupyter Notebook. This example includes several types of explicit reporting, including scalars, plots, and media.
Open in Google Colab
In the trains GitHub repository, this example includes a clickable icon to open the notebook in Google Colab.
Scalars
To report scalars, call the Logger.report_scalar method. The scalar plots appear in RESULTS > SCALARS.
# report two scalar series on two different graphs
for i in range(10):
    logger.report_scalar("graph A", "series A", iteration=i, value=1./(i+1))
    logger.report_scalar("graph B", "series B", iteration=i, value=10./(i+1))
# report two scalar series on the same graph
for i in range(10):
    logger.report_scalar("unified graph", "series A", iteration=i, value=1./(i+1))
    logger.report_scalar("unified graph", "series B", iteration=i, value=10./(i+1))
Plots
Plots appear in RESULTS > PLOTS.
2D Plots
Report 2D scatter plots by calling the Logger.report_scatter2d method. Use the mode parameter to plot data points as markers, or both lines and markers.
scatter2d = np.hstack(
(np.atleast_2d(np.arange(0, 10)).T, np.random.randint(10, size=(10, 1)))
)
# report 2d scatter plot with markers
logger.report_scatter2d(
"example_scatter",
"series_lines+markers",
iteration=iteration,
scatter=scatter2d,
xaxis="title x",
yaxis="title y",
mode='lines+markers'
)
3D Plots
To plot a series as a 3-dimensional scatter plot, use the Logger.report_scatter3d method.
# report 3d scatter plot
scatter3d = np.random.randint(10, size=(10, 3))
logger.report_scatter3d(
"example_scatter_3d",
"series_xyz",
iteration=iteration,
scatter=scatter3d,
xaxis="title x",
yaxis="title y",
zaxis="title z",
)
To plot a series as a surface plot, use the Logger.report_surface method.
# report 3d surface
surface = np.random.randint(10, size=(10, 10))
logger.report_surface(
"example_surface",
"series1",
iteration=iteration,
matrix=surface,
xaxis="title X",
yaxis="title Y",
zaxis="title Z",
)
Confusion matrices
Report confusion matrices by calling the Logger.report_matrix method.
# report confusion matrix
confusion = np.random.randint(10, size=(10, 10))
logger.report_matrix(
"example_confusion",
"ignored",
iteration=iteration,
matrix=confusion,
xaxis="title X",
yaxis="title Y",
)
Histograms
Report histograms by calling the Logger.report_histogram method. To report more than one series on the same plot, use the same title argument.
# report a single histogram
histogram = np.random.randint(10, size=10)
logger.report_histogram(
"single_histogram",
"random histogram",
iteration=iteration,
values=histogram,
xaxis="title x",
yaxis="title y",
)
# report two histograms on the same plot
histogram1 = np.random.randint(13, size=10)
histogram2 = histogram * 0.75
logger.report_histogram(
"two_histogram",
"series 1",
iteration=iteration,
values=histogram1,
xaxis="title x",
yaxis="title y",
)
logger.report_histogram(
"two_histogram",
"series 2",
iteration=iteration,
values=histogram2,
xaxis="title x",
yaxis="title y",
)
Media
Report audio, HTML, image, and video by calling the Logger.report_media method using the local_path parameter. They appear in RESULTS > DEBUG SAMPLES.
The media for these examples is downloaded using the StorageManager.get_local_copy method.
For example, download an image:
image_local_copy = StorageManager.get_local_copy(
remote_url="https://pytorch.org/tutorials/_static/img/neural-style/picasso.jpg",
name="picasso.jpg"
)
Audio
logger.report_media('audio', 'pink panther', iteration=1, local_path=audio_local_copy)
HTML
logger.report_media("html", "url_html", iteration=1, url="https://allegro.ai/docs/index.html")
Images
logger.report_image("image", "image from url", iteration=100, local_path=image_local_copy)
Video
logger.report_media('video', 'big bunny', iteration=1, local_path=video_local_copy)
Text
Report text messages by calling the Logger.report_text method.
logger.report_text("hello, this is plain text")
|
OpenStack task_state in Python API
Hello guys!
I have a problem: I need to get the task_state of a VM via Python, but I don't know how. Here's the thing:
{
"server": {
"OS-DCF:diskConfig": "AUTO",
"OS-EXT-AZ:availability_zone": "nova",
"OS-EXT-SRV-ATTR:host": "compute",
"OS-EXT-SRV-ATTR:hostname": "new-server-test",
"OS-EXT-SRV-ATTR:hypervisor_hostname": "fake-mini",
"OS-EXT-SRV-ATTR:instance_name": "instance-00000001",
"OS-EXT-SRV-ATTR:kernel_id": "",
"OS-EXT-SRV-ATTR:launch_index": 0,
"OS-EXT-SRV-ATTR:ramdisk_id": "",
"OS-EXT-SRV-ATTR:reservation_id": "r-ov3q80zj",
"OS-EXT-SRV-ATTR:root_device_name": "/dev/sda",
"OS-EXT-SRV-ATTR:user_data": "IyEvYmluL2Jhc2gKL2Jpbi9zdQplY2hvICJJIGFtIGluIHlvdSEiCg==",
"OS-EXT-STS:power_state": 1,
"OS-EXT-STS:task_state": null,
"OS-EXT-STS:vm_state": "active",
"OS-SRV-USG:launched_at": "2017-02-14T19:23:59.895661",
"OS-SRV-USG:terminated_at": null,
"accessIPv4": "1.2.3.4",
"accessIPv6": "80fe::",
"addresses": {
"private": [
{
"OS-EXT-IPS-MAC:mac_addr": "aa:bb:cc:dd:ee:ff",
"OS-EXT-IPS:type": "fixed",
"addr": "192.168.0.3",
"version": 4
}
]
},
"config_drive": "",
"created": "2017-02-14T19:23:58Z",
"description": null,
}
}
If I want to get e.g. created I can use this:
return server.created
but when I want to use task_state I cannot use this:
return server.OS-EXT-STS:task_state
Can you please give me some ideas on how to solve this?
I already tried to iterate through the keys, but it says that the Server object is not iterable and has neither keys() nor values(). I ran out of ideas :D Here's my code:
def server_exists(name, smn=False):
    # create a session
    nova_client = Client_nova(session=get_session(), version=2)
    # creates a list of servers
    servers_list = nova_client.servers.list()
    # search the server in the list
    for s in servers_list:
        if s.name == name:
            return s
    return False

def get_node_task(node_cloud_id):
    server = server_exists(node_cloud_id)
    if not server:
        raise RuntimeError("Server does not exist")
    else:
        #TODO
Thanks!
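One common workaround, not part of the original post, is to avoid dot access for attribute names that contain dashes and a colon and use getattr instead (a sketch, not tested against your deployment):
def get_node_task(node_cloud_id):
    server = server_exists(node_cloud_id)
    if not server:
        raise RuntimeError("Server does not exist")
    # dot access cannot express 'OS-EXT-STS:task_state', so fall back to getattr
    return getattr(server, 'OS-EXT-STS:task_state')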
|
Recently, after a round of patching (11/13), all of the servers stopped authenticating SSH sessions using AD creds. However, if I log in to a server as root, I can su to the AD account without issue.
I think a patch changed the server behavior, but I cannot figure out where the issue is. Here are some logs that might help.
I would really appreciate anyone's input.
/var/log/sssd/sssd_somedomain.com:
Code: Select all
(2020-11-24 13:51:21): [be[somedomain.com]] [sbus_dispatch] (0x4000): dbus conn: 0x561688c87cd0
(2020-11-24 13:51:21): [be[somedomain.com]] [sbus_dispatch] (0x4000): Dispatching.
(2020-11-24 13:51:21): [be[somedomain.com]] [sbus_message_handler] (0x2000): Received SBUS method org.freedesktop.sssd.dataprovider.getAccountInfo on path /org/freedesktop/sssd/dataprovider
(2020-11-24 13:51:21): [be[somedomain.com]] [sbus_get_sender_id_send] (0x2000): Not a sysbus message, quit
(2020-11-24 13:51:21): [be[somedomain.com]] [dp_get_account_info_handler] (0x0200): Got request for [0x3][BE_REQ_INITGROUPS][name=tonyg@somedomain.com]
(2020-11-24 13:51:21): [be[somedomain.com]] [sss_domain_get_state] (0x1000): Domain somedomain.com is Active
(2020-11-24 13:51:21): [be[somedomain.com]] [merge_msg_ts_attrs] (0x2000): No such DN in the timestamp cache: name=tonyg@somedomain.com,cn=users,cn=somedomain.com,cn=sysdb
(2020-11-24 13:51:21): [be[somedomain.com]] [sysdb_merge_res_ts_attrs] (0x2000): TS cache doesn't contain this DN, skipping
(2020-11-24 13:51:21): [be[somedomain.com]] [dp_attach_req] (0x0400): DP Request [Initgroups #47]: New request. Flags [0x0001].
(2020-11-24 13:51:21): [be[somedomain.com]] [dp_attach_req] (0x0400): Number of active DP request: 1
(2020-11-24 13:51:21): [be[somedomain.com]] [sss_domain_get_state] (0x1000): Domain somedomain.com is Active
(2020-11-24 13:51:21): [be[somedomain.com]] [sss_domain_get_state] (0x1000): Domain somedomain.com is Active
(2020-11-24 13:51:21): [be[somedomain.com]] [merge_msg_ts_attrs] (0x2000): No such DN in the timestamp cache: name=tonyg@somedomain.com,cn=users,cn=somedomain.com,cn=sysdb
(2020-11-24 13:51:21): [be[somedomain.com]] [sysdb_merge_res_ts_attrs] (0x2000): TS cache doesn't contain this DN, skipping
(2020-11-24 13:51:21): [be[somedomain.com]] [merge_msg_ts_attrs] (0x2000): No such DN in the timestamp cache: name=tonyg@somedomain.com,cn=users,cn=somedomain.com,cn=sysdb
(2020-11-24 13:51:21): [be[somedomain.com]] [sysdb_merge_res_ts_attrs] (0x2000): TS cache doesn't contain this DN, skipping
(2020-11-24 13:51:21): [be[somedomain.com]] [check_if_pac_is_available] (0x4000): No PAC available.
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_id_op_connect_step] (0x4000): reusing cached connection
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_get_initgr_send] (0x4000): Retrieving info for initgroups call
(2020-11-24 13:51:21): [be[somedomain.com]] [get_ldap_conn_from_sdom_pvt] (0x4000): Returning LDAP connection for user lookup.
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_id_op_connect_step] (0x4000): reusing cached connection
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_get_initgr_next_base] (0x0400): Searching for users with base [DC=ffusa,DC=com]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_print_server] (0x2000): Searching 172.19.4.40:389
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_get_generic_ext_step] (0x0400): calling ldap_search_ext with [(&(sAMAccountName=tonyg)(objectclass=user)(objectSID=*))][DC=ffusa,DC=com].
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_get_generic_ext_step] (0x1000): Requesting attrs: [objectClass]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_get_generic_ext_step] (0x1000): Requesting attrs: [sAMAccountName]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_get_generic_ext_step] (0x1000): Requesting attrs: [unixUserPassword]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_get_generic_ext_step] (0x1000): Requesting attrs: [uidNumber]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_get_generic_ext_step] (0x1000): Requesting attrs: [gidNumber]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_get_generic_ext_step] (0x1000): Requesting attrs: [gecos]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_get_generic_ext_step] (0x1000): Requesting attrs: [unixHomeDirectory]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_get_generic_ext_step] (0x1000): Requesting attrs: [loginShell]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_get_generic_ext_step] (0x1000): Requesting attrs: [userPrincipalName]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_get_generic_ext_step] (0x1000): Requesting attrs: [name]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_get_generic_ext_step] (0x1000): Requesting attrs: [memberOf]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_get_generic_ext_step] (0x1000): Requesting attrs: [objectGUID]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_get_generic_ext_step] (0x1000): Requesting attrs: [objectSID]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_get_generic_ext_step] (0x1000): Requesting attrs: [primaryGroupID]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_get_generic_ext_step] (0x1000): Requesting attrs: [whenChanged]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_get_generic_ext_step] (0x1000): Requesting attrs: [uSNChanged]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_get_generic_ext_step] (0x1000): Requesting attrs: [accountExpires]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_get_generic_ext_step] (0x1000): Requesting attrs: [userAccountControl]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_get_generic_ext_step] (0x1000): Requesting attrs: [userCertificate;binary]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_get_generic_ext_step] (0x1000): Requesting attrs: [mail]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_get_generic_ext_step] (0x2000): ldap_search_ext called, msgid = 6
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_op_add] (0x2000): New operation 6 timeout 6
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_process_result] (0x2000): Trace: sh[0x561688c76dd0], connected[1], ops[0x561688cad9a0], ldap[0x561688c9f400]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_process_message] (0x4000): Message type: [LDAP_RES_SEARCH_ENTRY]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_parse_entry] (0x1000): OriginalDN: [CN=Tony Guadagno,CN=Users,DC=ffusa,DC=com].
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_parse_range] (0x2000): No sub-attributes for [objectClass]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_parse_range] (0x2000): No sub-attributes for [whenChanged]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_parse_range] (0x2000): No sub-attributes for [memberOf]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_parse_range] (0x2000): No sub-attributes for [uSNChanged]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_parse_range] (0x2000): No sub-attributes for [name]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_parse_range] (0x2000): No sub-attributes for [objectGUID]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_parse_range] (0x2000): No sub-attributes for [userAccountControl]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_parse_range] (0x2000): No sub-attributes for [primaryGroupID]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_parse_range] (0x2000): No sub-attributes for [objectSid]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_parse_range] (0x2000): No sub-attributes for [accountExpires]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_parse_range] (0x2000): No sub-attributes for [sAMAccountName]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_parse_range] (0x2000): No sub-attributes for [userPrincipalName]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_process_result] (0x2000): Trace: sh[0x561688c76dd0], connected[1], ops[0x561688cad9a0], ldap[0x561688c9f400]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_process_message] (0x4000): Message type: [LDAP_RES_SEARCH_REFERENCE]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_get_generic_ext_add_references] (0x1000): Additional References: ldap://ForestDnsZones.somedomain.com/DC=ForestDnsZones,DC=ffusa,DC=com
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_process_result] (0x2000): Trace: sh[0x561688c76dd0], connected[1], ops[0x561688cad9a0], ldap[0x561688c9f400]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_process_message] (0x4000): Message type: [LDAP_RES_SEARCH_REFERENCE]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_get_generic_ext_add_references] (0x1000): Additional References: ldap://DomainDnsZones.somedomain.com/DC=DomainDnsZones,DC=ffusa,DC=com
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_process_result] (0x2000): Trace: sh[0x561688c76dd0], connected[1], ops[0x561688cad9a0], ldap[0x561688c9f400]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_process_message] (0x4000): Message type: [LDAP_RES_SEARCH_REFERENCE]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_get_generic_ext_add_references] (0x1000): Additional References: ldap://somedomain.com/CN=Configuration,DC=ffusa,DC=com
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_process_result] (0x2000): Trace: sh[0x561688c76dd0], connected[1], ops[0x561688cad9a0], ldap[0x561688c9f400]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_process_message] (0x4000): Message type: [LDAP_RES_SEARCH_RESULT]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_get_generic_op_finished] (0x0400): Search result: Success(0), no errmsg set
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_op_destructor] (0x2000): Operation 6 finished
(2020-11-24 13:51:21): [be[somedomain.com]] [generic_ext_search_handler] (0x4000): Request included referrals which were ignored.
(2020-11-24 13:51:21): [be[somedomain.com]] [generic_ext_search_handler] (0x4000): Ref: ldap://ForestDnsZones.somedomain.com/DC=ForestDnsZones,DC=ffusa,DC=com
(2020-11-24 13:51:21): [be[somedomain.com]] [generic_ext_search_handler] (0x4000): Ref: ldap://DomainDnsZones.somedomain.com/DC=DomainDnsZones,DC=ffusa,DC=com
(2020-11-24 13:51:21): [be[somedomain.com]] [generic_ext_search_handler] (0x4000): Ref: ldap://somedomain.com/CN=Configuration,DC=ffusa,DC=com
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_get_initgr_user] (0x4000): Receiving info for the user
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_get_initgr_user] (0x4000): Storing the user
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_save_user] (0x0400): Save user
(2020-11-24 13:51:21): [be[somedomain.com]] [sss_domain_get_state] (0x1000): Domain somedomain.com is Active
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_get_primary_name] (0x0400): Processing object tonyg
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_save_user] (0x0400): Processing user tonyg@somedomain.com
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_save_user] (0x1000): Mapping user [tonyg@somedomain.com] objectSID [S-1-5-21-4264107145-2280387099-501860720-1106] to unix ID
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_save_user] (0x2000): Adding originalDN [CN=Tony Guadagno,CN=Users,DC=ffusa,DC=com] to attributes of [tonyg@somedomain.com].
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_save_user] (0x0400): Adding original memberOf attributes to [tonyg@somedomain.com].
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_attrs_add_ldap_attr] (0x2000): Adding original mod-Timestamp [20201124154505.0Z] to attributes of [tonyg@somedomain.com].
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_save_user] (0x0400): Adding user principal [tonyg@somedomain.com] to attributes of [tonyg@somedomain.com].
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_attrs_add_ldap_attr] (0x2000): shadowLastChange is not available for [tonyg@somedomain.com].
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_attrs_add_ldap_attr] (0x2000): shadowMin is not available for [tonyg@somedomain.com].
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_attrs_add_ldap_attr] (0x2000): shadowMax is not available for [tonyg@somedomain.com].
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_attrs_add_ldap_attr] (0x2000): shadowWarning is not available for [tonyg@somedomain.com].
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_attrs_add_ldap_attr] (0x2000): shadowInactive is not available for [tonyg@somedomain.com].
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_attrs_add_ldap_attr] (0x2000): shadowExpire is not available for [tonyg@somedomain.com].
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_attrs_add_ldap_attr] (0x2000): shadowFlag is not available for [tonyg@somedomain.com].
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_attrs_add_ldap_attr] (0x2000): krbLastPwdChange is not available for [tonyg@somedomain.com].
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_attrs_add_ldap_attr] (0x2000): krbPasswordExpiration is not available for [tonyg@somedomain.com].
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_attrs_add_ldap_attr] (0x2000): pwdAttribute is not available for [tonyg@somedomain.com].
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_attrs_add_ldap_attr] (0x2000): authorizedService is not available for [tonyg@somedomain.com].
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_attrs_add_ldap_attr] (0x2000): Adding adAccountExpires [9223372036854775807] to attributes of [tonyg@somedomain.com].
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_attrs_add_ldap_attr] (0x2000): Adding adUserAccountControl [4260352] to attributes of [tonyg@somedomain.com].
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_attrs_add_ldap_attr] (0x2000): nsAccountLock is not available for [tonyg@somedomain.com].
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_attrs_add_ldap_attr] (0x2000): authorizedHost is not available for [tonyg@somedomain.com].
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_attrs_add_ldap_attr] (0x2000): authorizedRHost is not available for [tonyg@somedomain.com].
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_attrs_add_ldap_attr] (0x2000): ndsLoginDisabled is not available for [tonyg@somedomain.com].
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_attrs_add_ldap_attr] (0x2000): ndsLoginExpirationTime is not available for [tonyg@somedomain.com].
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_attrs_add_ldap_attr] (0x2000): ndsLoginAllowedTimeMap is not available for [tonyg@somedomain.com].
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_attrs_add_ldap_attr] (0x2000): sshPublicKey is not available for [tonyg@somedomain.com].
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_attrs_add_ldap_attr] (0x2000): authType is not available for [tonyg@somedomain.com].
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_attrs_add_ldap_attr] (0x2000): userCertificate is not available for [tonyg@somedomain.com].
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_attrs_add_ldap_attr] (0x2000): mail is not available for [tonyg@somedomain.com].
(2020-11-24 13:51:21): [be[somedomain.com]] [sysdb_attrs_get_aliases] (0x2000): Domain is case-insensitive; will add lowercased aliases
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_save_user] (0x0400): Storing info for user tonyg@somedomain.com
(2020-11-24 13:51:21): [be[somedomain.com]] [merge_msg_ts_attrs] (0x2000): No such DN in the timestamp cache: name=tonyg@somedomain.com,cn=users,cn=somedomain.com,cn=sysdb
(2020-11-24 13:51:21): [be[somedomain.com]] [sysdb_merge_res_ts_attrs] (0x2000): TS cache doesn't contain this DN, skipping
(2020-11-24 13:51:21): [be[somedomain.com]] [merge_msg_ts_attrs] (0x2000): No such DN in the timestamp cache: name=tonyg@somedomain.com,cn=users,cn=somedomain.com,cn=sysdb
(2020-11-24 13:51:21): [be[somedomain.com]] [sysdb_merge_res_ts_attrs] (0x2000): TS cache doesn't contain this DN, skipping
(2020-11-24 13:51:21): [be[somedomain.com]] [sysdb_set_cache_entry_attr] (0x0080): ldb_modify failed: [No such object](32)[ldb_wait from ldb_modify with LDB_WAIT_ALL: No such object (32)]
(2020-11-24 13:51:21): [be[somedomain.com]] [sysdb_set_cache_entry_attr] (0x0400): No such entry
(2020-11-24 13:51:21): [be[somedomain.com]] [sysdb_set_entry_attr] (0x0080): Cannot set ts attrs for name=tonyg@somedomain.com,cn=users,cn=somedomain.com,cn=sysdb
(2020-11-24 13:51:21): [be[somedomain.com]] [sysdb_remove_attrs] (0x2000): Removing attribute [userPassword] from [tonyg@somedomain.com]
(2020-11-24 13:51:21): [be[somedomain.com]] [sysdb_remove_attrs] (0x2000): Removing attribute [homeDirectory] from [tonyg@somedomain.com]
(2020-11-24 13:51:21): [be[somedomain.com]] [sysdb_remove_attrs] (0x2000): Removing attribute [loginShell] from [tonyg@somedomain.com]
(2020-11-24 13:51:21): [be[somedomain.com]] [sysdb_remove_attrs] (0x2000): Removing attribute [userCertificate] from [tonyg@somedomain.com]
(2020-11-24 13:51:21): [be[somedomain.com]] [sysdb_remove_attrs] (0x2000): Removing attribute [mail] from [tonyg@somedomain.com]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_get_initgr_user] (0x4000): Commit change
(2020-11-24 13:51:21): [be[somedomain.com]] [merge_msg_ts_attrs] (0x2000): No such DN in the timestamp cache: name=tonyg@somedomain.com,cn=users,cn=somedomain.com,cn=sysdb
(2020-11-24 13:51:21): [be[somedomain.com]] [sysdb_merge_res_ts_attrs] (0x2000): TS cache doesn't contain this DN, skipping
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_get_initgr_user] (0x4000): Process user's groups
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_id_op_connect_step] (0x4000): reusing cached connection
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_print_server] (0x2000): Searching 172.19.4.40:389
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_get_generic_ext_step] (0x0400): calling ldap_search_ext with [no filter][CN=Tony Guadagno,CN=Users,DC=ffusa,DC=com].
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_get_generic_ext_step] (0x1000): Requesting attrs: [tokenGroups]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_get_generic_ext_step] (0x2000): ldap_search_ext called, msgid = 7
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_op_add] (0x2000): New operation 7 timeout 6
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_process_result] (0x2000): Trace: sh[0x561688c76dd0], connected[1], ops[0x561688ca1600], ldap[0x561688c9f400]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_process_result] (0x2000): Trace: end of ldap_result list
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_process_result] (0x2000): Trace: sh[0x561688c76dd0], connected[1], ops[0x561688ca1600], ldap[0x561688c9f400]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_process_message] (0x4000): Message type: [LDAP_RES_SEARCH_ENTRY]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_parse_entry] (0x1000): OriginalDN: [CN=Tony Guadagno,CN=Users,DC=ffusa,DC=com].
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_parse_range] (0x2000): No sub-attributes for [tokenGroups]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_process_result] (0x2000): Trace: sh[0x561688c76dd0], connected[1], ops[0x561688ca1600], ldap[0x561688c9f400]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_process_message] (0x4000): Message type: [LDAP_RES_SEARCH_RESULT]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_get_generic_op_finished] (0x0400): Search result: Success(0), no errmsg set
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_op_destructor] (0x2000): Operation 7 finished
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_ad_save_group_membership_with_idmapping] (0x1000): Processing membership SID [S-1-5-32-551]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_idmap_sid_to_unix] (0x0400): Object SID [S-1-5-32-551] is a built-in one.
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_ad_save_group_membership_with_idmapping] (0x0400): Skipping built-in object.
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_ad_save_group_membership_with_idmapping] (0x1000): Processing membership SID [S-1-5-32-544]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_idmap_sid_to_unix] (0x0400): Object SID [S-1-5-32-544] is a built-in one.
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_ad_save_group_membership_with_idmapping] (0x0400): Skipping built-in object.
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_ad_save_group_membership_with_idmapping] (0x1000): Processing membership SID [S-1-5-32-545]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_idmap_sid_to_unix] (0x0400): Object SID [S-1-5-32-545] is a built-in one.
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_ad_save_group_membership_with_idmapping] (0x0400): Skipping built-in object.
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_ad_save_group_membership_with_idmapping] (0x1000): Processing membership SID [S-1-5-21-4264107145-2280387099-501860720-572]
(2020-11-24 13:51:21): [be[somedomain.com]] [sss_domain_get_state] (0x1000): Domain somedomain.com is Active
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_ad_save_group_membership_with_idmapping] (0x1000): SID [S-1-5-21-4264107145-2280387099-501860720-572] maps to GID [1940200572]
(2020-11-24 13:51:21): [be[somedomain.com]] [merge_msg_ts_attrs] (0x2000): No such DN in the timestamp cache: name=Denied RODC Password Replication Group@somedomain.com,cn=groups,cn=somedomain.com,cn=sysdb
(2020-11-24 13:51:21): [be[somedomain.com]] [sysdb_merge_res_ts_attrs] (0x2000): TS cache doesn't contain this DN, skipping
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_ad_save_group_membership_with_idmapping] (0x1000): Processing membership SID [S-1-5-21-4264107145-2280387099-501860720-519]
(2020-11-24 13:51:21): [be[somedomain.com]] [sss_domain_get_state] (0x1000): Domain somedomain.com is Active
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_ad_save_group_membership_with_idmapping] (0x1000): SID [S-1-5-21-4264107145-2280387099-501860720-519] maps to GID [1940200519]
(2020-11-24 13:51:21): [be[somedomain.com]] [merge_msg_ts_attrs] (0x2000): No such DN in the timestamp cache: name=Enterprise Admins@somedomain.com,cn=groups,cn=somedomain.com,cn=sysdb
(2020-11-24 13:51:21): [be[somedomain.com]] [sysdb_merge_res_ts_attrs] (0x2000): TS cache doesn't contain this DN, skipping
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_ad_save_group_membership_with_idmapping] (0x1000): Processing membership SID [S-1-5-21-4264107145-2280387099-501860720-518]
(2020-11-24 13:51:21): [be[somedomain.com]] [sss_domain_get_state] (0x1000): Domain somedomain.com is Active
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_ad_save_group_membership_with_idmapping] (0x1000): SID [S-1-5-21-4264107145-2280387099-501860720-518] maps to GID [1940200518]
(2020-11-24 13:51:21): [be[somedomain.com]] [merge_msg_ts_attrs] (0x2000): No such DN in the timestamp cache: name=Schema Admins@somedomain.com,cn=groups,cn=somedomain.com,cn=sysdb
(2020-11-24 13:51:21): [be[somedomain.com]] [sysdb_merge_res_ts_attrs] (0x2000): TS cache doesn't contain this DN, skipping
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_ad_save_group_membership_with_idmapping] (0x1000): Processing membership SID [S-1-5-21-4264107145-2280387099-501860720-512]
(2020-11-24 13:51:21): [be[somedomain.com]] [sss_domain_get_state] (0x1000): Domain somedomain.com is Active
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_ad_save_group_membership_with_idmapping] (0x1000): SID [S-1-5-21-4264107145-2280387099-501860720-512] maps to GID [1940200512]
(2020-11-24 13:51:21): [be[somedomain.com]] [merge_msg_ts_attrs] (0x2000): No such DN in the timestamp cache: name=Domain Admins@somedomain.com,cn=groups,cn=somedomain.com,cn=sysdb
(2020-11-24 13:51:21): [be[somedomain.com]] [sysdb_merge_res_ts_attrs] (0x2000): TS cache doesn't contain this DN, skipping
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_ad_save_group_membership_with_idmapping] (0x1000): Processing membership SID [S-1-5-21-4264107145-2280387099-501860720-1218]
(2020-11-24 13:51:21): [be[somedomain.com]] [sss_domain_get_state] (0x1000): Domain somedomain.com is Active
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_ad_save_group_membership_with_idmapping] (0x1000): SID [S-1-5-21-4264107145-2280387099-501860720-1218] maps to GID [1940201218]
(2020-11-24 13:51:21): [be[somedomain.com]] [merge_msg_ts_attrs] (0x2000): No such DN in the timestamp cache: name=VCenter Admins@somedomain.com,cn=groups,cn=somedomain.com,cn=sysdb
(2020-11-24 13:51:21): [be[somedomain.com]] [sysdb_merge_res_ts_attrs] (0x2000): TS cache doesn't contain this DN, skipping
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_ad_save_group_membership_with_idmapping] (0x1000): Processing membership SID [S-1-5-21-4264107145-2280387099-501860720-1175]
(2020-11-24 13:51:21): [be[somedomain.com]] [sss_domain_get_state] (0x1000): Domain somedomain.com is Active
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_ad_save_group_membership_with_idmapping] (0x1000): SID [S-1-5-21-4264107145-2280387099-501860720-1175] maps to GID [1940201175]
(2020-11-24 13:51:21): [be[somedomain.com]] [merge_msg_ts_attrs] (0x2000): No such DN in the timestamp cache: name=WebTickets@somedomain.com,cn=groups,cn=somedomain.com,cn=sysdb
(2020-11-24 13:51:21): [be[somedomain.com]] [sysdb_merge_res_ts_attrs] (0x2000): TS cache doesn't contain this DN, skipping
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_ad_save_group_membership_with_idmapping] (0x1000): Processing membership SID [S-1-5-21-4264107145-2280387099-501860720-513]
(2020-11-24 13:51:21): [be[somedomain.com]] [sss_domain_get_state] (0x1000): Domain somedomain.com is Active
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_ad_save_group_membership_with_idmapping] (0x1000): SID [S-1-5-21-4264107145-2280387099-501860720-513] maps to GID [1940200513]
(2020-11-24 13:51:21): [be[somedomain.com]] [merge_msg_ts_attrs] (0x2000): No such DN in the timestamp cache: name=Domain Users@somedomain.com,cn=groups,cn=somedomain.com,cn=sysdb
(2020-11-24 13:51:21): [be[somedomain.com]] [sysdb_merge_res_ts_attrs] (0x2000): TS cache doesn't contain this DN, skipping
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_ad_save_group_membership_with_idmapping] (0x1000): Processing membership SID [S-1-5-21-4264107145-2280387099-501860720-1205]
(2020-11-24 13:51:21): [be[somedomain.com]] [sss_domain_get_state] (0x1000): Domain somedomain.com is Active
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_ad_save_group_membership_with_idmapping] (0x1000): SID [S-1-5-21-4264107145-2280387099-501860720-1205] maps to GID [1940201205]
(2020-11-24 13:51:21): [be[somedomain.com]] [merge_msg_ts_attrs] (0x2000): No such DN in the timestamp cache: name=TSSUsers@somedomain.com,cn=groups,cn=somedomain.com,cn=sysdb
(2020-11-24 13:51:21): [be[somedomain.com]] [sysdb_merge_res_ts_attrs] (0x2000): TS cache doesn't contain this DN, skipping
(2020-11-24 13:51:21): [be[somedomain.com]] [merge_msg_ts_attrs] (0x2000): No such DN in the timestamp cache: name=tonyg@somedomain.com,cn=users,cn=somedomain.com,cn=sysdb
(2020-11-24 13:51:21): [be[somedomain.com]] [sysdb_merge_res_ts_attrs] (0x2000): TS cache doesn't contain this DN, skipping
(2020-11-24 13:51:21): [be[somedomain.com]] [merge_msg_ts_attrs] (0x2000): No such DN in the timestamp cache: name=tonyg@somedomain.com,cn=users,cn=somedomain.com,cn=sysdb
(2020-11-24 13:51:21): [be[somedomain.com]] [sysdb_merge_res_ts_attrs] (0x2000): TS cache doesn't contain this DN, skipping
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_ad_tokengroups_update_members] (0x1000): Updating memberships for [tonyg@somedomain.com]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_id_op_destroy] (0x4000): releasing operation connection
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_get_initgr_done] (0x4000): Initgroups done
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_get_initgr_done] (0x1000): Mapping primary group to unix ID
(2020-11-24 13:51:21): [be[somedomain.com]] [merge_msg_ts_attrs] (0x2000): No such DN in the timestamp cache: name=Domain Users@somedomain.com,cn=groups,cn=somedomain.com,cn=sysdb
(2020-11-24 13:51:21): [be[somedomain.com]] [sysdb_merge_res_ts_attrs] (0x2000): TS cache doesn't contain this DN, skipping
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_get_initgr_done] (0x0400): Primary group already cached, nothing to do.
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_get_initgr_done] (0x4000): No need to check for domain local group memberships.
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_id_op_destroy] (0x4000): releasing operation connection
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_id_op_done] (0x4000): releasing operation connection
(2020-11-24 13:51:21): [be[somedomain.com]] [dp_req_done] (0x0400): DP Request [Initgroups #47]: Request handler finished [0]: Success
(2020-11-24 13:51:21): [be[somedomain.com]] [_dp_req_recv] (0x0400): DP Request [Initgroups #47]: Receiving request data.
(2020-11-24 13:51:21): [be[somedomain.com]] [merge_msg_ts_attrs] (0x2000): No such DN in the timestamp cache: name=tonyg@somedomain.com,cn=users,cn=somedomain.com,cn=sysdb
(2020-11-24 13:51:21): [be[somedomain.com]] [sysdb_merge_res_ts_attrs] (0x2000): TS cache doesn't contain this DN, skipping
(2020-11-24 13:51:21): [be[somedomain.com]] [sysdb_set_cache_entry_attr] (0x0080): ldb_modify failed: [No such object](32)[ldb_wait from ldb_modify with LDB_WAIT_ALL: No such object (32)]
(2020-11-24 13:51:21): [be[somedomain.com]] [sysdb_set_cache_entry_attr] (0x0400): No such entry
(2020-11-24 13:51:21): [be[somedomain.com]] [sysdb_set_entry_attr] (0x0080): Cannot set ts attrs for name=tonyg@somedomain.com,cn=users,cn=somedomain.com,cn=sysdb
(2020-11-24 13:51:21): [be[somedomain.com]] [dp_req_initgr_pp_nss_notify] (0x0400): Ordering NSS responder to update memory cache
(2020-11-24 13:51:21): [be[somedomain.com]] [dp_req_reply_list_success] (0x0400): DP Request [Initgroups #47]: Finished. Success.
(2020-11-24 13:51:21): [be[somedomain.com]] [dp_req_reply_std] (0x1000): DP Request [Initgroups #47]: Returning [Success]: 0,0,Success
(2020-11-24 13:51:21): [be[somedomain.com]] [dp_table_value_destructor] (0x0400): Removing [0:1:0x0001:3::somedomain.com:name=tonyg@somedomain.com] from reply table
(2020-11-24 13:51:21): [be[somedomain.com]] [dp_req_destructor] (0x0400): DP Request [Initgroups #47]: Request removed.
(2020-11-24 13:51:21): [be[somedomain.com]] [dp_req_destructor] (0x0400): Number of active DP request: 0
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_process_result] (0x2000): Trace: sh[0x561688c76dd0], connected[1], ops[(nil)], ldap[0x561688c9f400]
(2020-11-24 13:51:21): [be[somedomain.com]] [sdap_process_result] (0x2000): Trace: end of ldap_result list
(2020-11-24 13:51:21): [be[somedomain.com]] [sbus_dispatch] (0x4000): dbus conn: 0x561688c8e7c0
(2020-11-24 13:51:21): [be[somedomain.com]] [sbus_dispatch] (0x4000): Dispatching.
(2020-11-24 13:51:21): [be[somedomain.com]] [sbus_dispatch] (0x4000): dbus conn: 0x561688c87cd0
(2020-11-24 13:51:21): [be[somedomain.com]] [sbus_dispatch] (0x4000): Dispatching.
(2020-11-24 13:51:21): [be[somedomain.com]] [sbus_message_handler] (0x2000): Received SBUS method org.freedesktop.sssd.dataprovider.pamHandler on path /org/freedesktop/sssd/dataprovider
(2020-11-24 13:51:21): [be[somedomain.com]] [sbus_get_sender_id_send] (0x2000): Not a sysbus message, quit
(2020-11-24 13:51:21): [be[somedomain.com]] [dp_pam_handler] (0x0100): Got request with the following data
(2020-11-24 13:51:21): [be[somedomain.com]] [pam_print_data] (0x0100): command: SSS_PAM_AUTHENTICATE
(2020-11-24 13:51:21): [be[somedomain.com]] [pam_print_data] (0x0100): domain: somedomain.com
(2020-11-24 13:51:21): [be[somedomain.com]] [pam_print_data] (0x0100): user: tonyg@somedomain.com
(2020-11-24 13:51:21): [be[somedomain.com]] [pam_print_data] (0x0100): service: sshd
(2020-11-24 13:51:21): [be[somedomain.com]] [pam_print_data] (0x0100): tty: ssh
(2020-11-24 13:51:21): [be[somedomain.com]] [pam_print_data] (0x0100): ruser:
(2020-11-24 13:51:21): [be[somedomain.com]] [pam_print_data] (0x0100): rhost: 192.168.199.12
(2020-11-24 13:51:21): [be[somedomain.com]] [pam_print_data] (0x0100): authtok type: 1
(2020-11-24 13:51:21): [be[somedomain.com]] [pam_print_data] (0x0100): newauthtok type: 0
(2020-11-24 13:51:21): [be[somedomain.com]] [pam_print_data] (0x0100): priv: 1
(2020-11-24 13:51:21): [be[somedomain.com]] [pam_print_data] (0x0100): cli_pid: 9594
(2020-11-24 13:51:21): [be[somedomain.com]] [pam_print_data] (0x0100): logon name: not set
(2020-11-24 13:51:21): [be[somedomain.com]] [dp_attach_req] (0x0400): DP Request [PAM Authenticate #48]: New request. Flags [0000].
(2020-11-24 13:51:21): [be[somedomain.com]] [dp_attach_req] (0x0400): Number of active DP request: 1
(2020-11-24 13:51:21): [be[somedomain.com]] [sss_domain_get_state] (0x1000): Domain somedomain.com is Active
(2020-11-24 13:51:21): [be[somedomain.com]] [krb5_auth_queue_send] (0x1000): Wait queue of user [tonyg@somedomain.com] is empty, running request [0x561688c9ca80] immediately.
(2020-11-24 13:51:21): [be[somedomain.com]] [krb5_setup] (0x4000): No mapping for: tonyg@somedomain.com
(2020-11-24 13:51:21): [be[somedomain.com]] [merge_msg_ts_attrs] (0x2000): No such DN in the timestamp cache: name=tonyg@somedomain.com,cn=users,cn=somedomain.com,cn=sysdb
(2020-11-24 13:51:21): [be[somedomain.com]] [sysdb_merge_res_ts_attrs] (0x2000): TS cache doesn't contain this DN, skipping
(2020-11-24 13:51:21): [be[somedomain.com]] [krb5_auth_send] (0x0100): Home directory for user [tonyg@somedomain.com] not known.
(2020-11-24 13:51:21): [be[somedomain.com]] [fo_resolve_service_send] (0x0100): Trying to resolve service 'AD'
(2020-11-24 13:51:21): [be[somedomain.com]] [get_server_status] (0x1000): Status of server 'dc-1.somedomain.com' is 'working'
(2020-11-24 13:51:21): [be[somedomain.com]] [get_port_status] (0x1000): Port status of port 389 for server 'dc-1.somedomain.com' is 'working'
(2020-11-24 13:51:21): [be[somedomain.com]] [fo_resolve_service_activate_timeout] (0x2000): Resolve timeout set to 6 seconds
(2020-11-24 13:51:21): [be[somedomain.com]] [resolve_srv_send] (0x0200): The status of SRV lookup is resolved
(2020-11-24 13:51:21): [be[somedomain.com]] [get_server_status] (0x1000): Status of server 'dc-1.somedomain.com' is 'working'
(2020-11-24 13:51:21): [be[somedomain.com]] [be_resolve_server_process] (0x1000): Saving the first resolved server
(2020-11-24 13:51:21): [be[somedomain.com]] [be_resolve_server_process] (0x0200): Found address for server dc-1.somedomain.com: [172.19.4.40] TTL 3600
(2020-11-24 13:51:21): [be[somedomain.com]] [ad_resolve_callback] (0x0100): Constructed uri 'ldap://dc-1.somedomain.com'
(2020-11-24 13:51:21): [be[somedomain.com]] [ad_resolve_callback] (0x0100): Constructed GC uri 'ldap://dc-1.somedomain.com'
(2020-11-24 13:51:21): [be[somedomain.com]] [krb5_add_krb5info_offline_callback] (0x4000): Removal callback already available for service [AD].
(2020-11-24 13:51:21): [be[somedomain.com]] [unique_filename_destructor] (0x2000): Unlinking [/var/lib/sss/pubconf/.krb5info_dummy_67aUWF]
(2020-11-24 13:51:21): [be[somedomain.com]] [unlink_dbg] (0x2000): File already removed: [/var/lib/sss/pubconf/.krb5info_dummy_67aUWF]
(2020-11-24 13:51:21): [be[somedomain.com]] [sss_domain_get_state] (0x1000): Domain somedomain.com is Active
(2020-11-24 13:51:21): [be[somedomain.com]] [child_handler_setup] (0x2000): Setting up signal handler up for pid [9595]
(2020-11-24 13:51:21): [be[somedomain.com]] [child_handler_setup] (0x2000): Signal handler set up for pid [9595]
(2020-11-24 13:51:21): [be[somedomain.com]] [write_pipe_handler] (0x0400): All data has been sent!
(2020-11-24 13:51:21): [be[somedomain.com]] [child_sig_handler] (0x1000): Waiting for child [9595].
(2020-11-24 13:51:21): [be[somedomain.com]] [child_sig_handler] (0x0100): child [9595] finished successfully.
(2020-11-24 13:51:21): [be[somedomain.com]] [read_pipe_handler] (0x0400): EOF received, client finished
(2020-11-24 13:51:21): [be[somedomain.com]] [parse_krb5_child_response] (0x1000): child response [1432158209][6][8].
(2020-11-24 13:51:21): [be[somedomain.com]] [krb5_auth_done] (0x0040): The krb5_child process returned an error. Please inspect the krb5_child.log file or the journal for more information
(2020-11-24 13:51:21): [be[somedomain.com]] [check_wait_queue] (0x1000): Wait queue for user [tonyg@somedomain.com] is empty.
(2020-11-24 13:51:21): [be[somedomain.com]] [krb5_auth_queue_done] (0x1000): krb5_auth_queue request [0x561688c9ca80] done.
(2020-11-24 13:51:21): [be[somedomain.com]] [dp_req_done] (0x0400): DP Request [PAM Authenticate #48]: Request handler finished [0]: Success
(2020-11-24 13:51:21): [be[somedomain.com]] [_dp_req_recv] (0x0400): DP Request [PAM Authenticate #48]: Receiving request data.
(2020-11-24 13:51:21): [be[somedomain.com]] [dp_req_destructor] (0x0400): DP Request [PAM Authenticate #48]: Request removed.
(2020-11-24 13:51:21): [be[somedomain.com]] [dp_req_destructor] (0x0400): Number of active DP request: 0
(2020-11-24 13:51:21): [be[somedomain.com]] [dp_method_enabled] (0x0400): Target selinux is not configured
(2020-11-24 13:51:21): [be[somedomain.com]] [dp_pam_reply] (0x1000): DP Request [PAM Authenticate #48]: Sending result [4][somedomain.com]
Code: Select all
(2020-11-24 13:51:21): [krb5_child[9595]] [main] (0x0400): krb5_child started.
(2020-11-24 13:51:21): [krb5_child[9595]] [unpack_buffer] (0x1000): total buffer size: [158]
(2020-11-24 13:51:21): [krb5_child[9595]] [unpack_buffer] (0x0100): cmd [241] uid [1940201106] gid [1940200513] validate [true] enterprise principal [true] offline [false] UPN [tonyg@somedomain.com]
(2020-11-24 13:51:21): [krb5_child[9595]] [unpack_buffer] (0x0100): ccname: [KEYRING:persistent:1940201106] old_ccname: [KEYRING:persistent:1940201106] keytab: [/etc/krb5.keytab]
(2020-11-24 13:51:21): [krb5_child[9595]] [check_use_fast] (0x0100): Not using FAST.
(2020-11-24 13:51:21): [krb5_child[9595]] [switch_creds] (0x0200): Switch user to [1940201106][1940200513].
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_krb5_cc_verify_ccache] (0x2000): TGT not found or expired.
(2020-11-24 13:51:21): [krb5_child[9595]] [switch_creds] (0x0200): Switch user to [0][0].
(2020-11-24 13:51:21): [krb5_child[9595]] [k5c_check_old_ccache] (0x4000): Ccache_file is [KEYRING:persistent:1940201106] and is active and TGT is valid.
(2020-11-24 13:51:21): [krb5_child[9595]] [privileged_krb5_setup] (0x0080): Cannot open the PAC responder socket
(2020-11-24 13:51:21): [krb5_child[9595]] [become_user] (0x0200): Trying to become user [1940201106][1940200513].
(2020-11-24 13:51:21): [krb5_child[9595]] [main] (0x2000): Running as [1940201106][1940200513].
(2020-11-24 13:51:21): [krb5_child[9595]] [set_lifetime_options] (0x0100): No specific renewable lifetime requested.
(2020-11-24 13:51:21): [krb5_child[9595]] [set_lifetime_options] (0x0100): No specific lifetime requested.
(2020-11-24 13:51:21): [krb5_child[9595]] [set_canonicalize_option] (0x0100): Canonicalization is set to [true]
(2020-11-24 13:51:21): [krb5_child[9595]] [main] (0x0400): Will perform online auth
(2020-11-24 13:51:21): [krb5_child[9595]] [tgt_req_child] (0x1000): Attempting to get a TGT
(2020-11-24 13:51:21): [krb5_child[9595]] [get_and_save_tgt] (0x0400): Attempting kinit for realm [somedomain.com]
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_child_krb5_trace_cb] (0x4000): [9595] 1606243881.798965: Getting initial credentials for tonyg\@somedomain.com@somedomain.com
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_child_krb5_trace_cb] (0x4000): [9595] 1606243881.798967: Sending unauthenticated request
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_child_krb5_trace_cb] (0x4000): [9595] 1606243881.798968: Sending request (196 bytes) to somedomain.com
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_child_krb5_trace_cb] (0x4000): [9595] 1606243881.798969: Initiating TCP connection to stream 172.19.4.40:88
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_child_krb5_trace_cb] (0x4000): [9595] 1606243881.798970: Sending TCP request to stream 172.19.4.40:88
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_child_krb5_trace_cb] (0x4000): [9595] 1606243881.798971: Received answer (736 bytes) from stream 172.19.4.40:88
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_child_krb5_trace_cb] (0x4000): [9595] 1606243881.798972: Terminating TCP connection to stream 172.19.4.40:88
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_child_krb5_trace_cb] (0x4000): [9595] 1606243881.798973: Response was from master KDC
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_child_krb5_trace_cb] (0x4000): [9595] 1606243881.798974: Processing preauth types: PA-ETYPE-INFO2 (19)
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_child_krb5_trace_cb] (0x4000): [9595] 1606243881.798975: Selected etype info: etype aes256-cts, salt "somedomain.comtonyg", params ""
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_child_krb5_trace_cb] (0x4000): [9595] 1606243881.798976: Produced preauth for next request: (empty)
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_krb5_responder] (0x4000): Got question [password].
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_child_krb5_trace_cb] (0x4000): [9595] 1606243881.798977: Getting AS key, salt "somedomain.comtonyg", params ""
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_child_krb5_trace_cb] (0x4000): [9595] 1606243881.798978: AS key obtained from gak_fct: aes256-cts/19C1
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_child_krb5_trace_cb] (0x4000): [9595] 1606243881.798979: Decrypted AS reply; session key is: aes256-cts/25F3
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_child_krb5_trace_cb] (0x4000): [9595] 1606243881.798980: FAST negotiation: unavailable
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_krb5_expire_callback_func] (0x2000): exp_time: [530265404]
(2020-11-24 13:51:21): [krb5_child[9595]] [validate_tgt] (0x2000): Found keytab entry with the realm of the credential.
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_child_krb5_trace_cb] (0x4000): [9595] 1606243881.798981: Retrieving restrictedkrbhost/ffcplws.somedomain.com@somedomain.com from MEMORY:/etc/krb5.keytab (vno 0, enctype 0) with result: 0/Success
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_child_krb5_trace_cb] (0x4000): [9595] 1606243881.798982: Resolving unique ccache of type MEMORY
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_child_krb5_trace_cb] (0x4000): [9595] 1606243881.798983: Initializing MEMORY:nPr0P2A with default princ tonyg@somedomain.com
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_child_krb5_trace_cb] (0x4000): [9595] 1606243881.798984: Storing tonyg@somedomain.com -> krbtgt/somedomain.com@somedomain.com in MEMORY:nPr0P2A
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_child_krb5_trace_cb] (0x4000): [9595] 1606243881.798985: Getting credentials tonyg@somedomain.com -> restrictedkrbhost/ffcplws.somedomain.com@somedomain.com using ccache MEMORY:nPr0P2A
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_child_krb5_trace_cb] (0x4000): [9595] 1606243881.798986: Retrieving tonyg@somedomain.com -> restrictedkrbhost/ffcplws.somedomain.com@somedomain.com from MEMORY:nPr0P2A with result: -1765328243/Matching credential not found
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_child_krb5_trace_cb] (0x4000): [9595] 1606243881.798987: Retrieving tonyg@somedomain.com -> krbtgt/somedomain.com@somedomain.com from MEMORY:nPr0P2A with result: 0/Success
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_child_krb5_trace_cb] (0x4000): [9595] 1606243881.798988: Starting with TGT for client realm: tonyg@somedomain.com -> krbtgt/somedomain.com@somedomain.com
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_child_krb5_trace_cb] (0x4000): [9595] 1606243881.798989: Requesting tickets for restrictedkrbhost/ffcplws.somedomain.com@somedomain.com, referrals on
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_child_krb5_trace_cb] (0x4000): [9595] 1606243881.798990: Generated subkey for TGS request: aes256-cts/0A19
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_child_krb5_trace_cb] (0x4000): [9595] 1606243881.798991: etypes requested in TGS request: aes256-cts, aes128-cts, aes256-sha2, aes128-sha2, des3-cbc-sha1, rc4-hmac, camellia128-cts, camellia256-cts
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_child_krb5_trace_cb] (0x4000): [9595] 1606243881.798993: Encoding request body and padata into FAST request
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_child_krb5_trace_cb] (0x4000): [9595] 1606243881.798994: Sending request (924 bytes) to somedomain.com
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_child_krb5_trace_cb] (0x4000): [9595] 1606243881.798995: Initiating TCP connection to stream 172.19.4.40:88
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_child_krb5_trace_cb] (0x4000): [9595] 1606243881.798996: Sending TCP request to stream 172.19.4.40:88
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_child_krb5_trace_cb] (0x4000): [9595] 1606243881.798997: Received answer (347 bytes) from stream 172.19.4.40:88
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_child_krb5_trace_cb] (0x4000): [9595] 1606243881.798998: Terminating TCP connection to stream 172.19.4.40:88
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_child_krb5_trace_cb] (0x4000): [9595] 1606243881.798999: Response was from master KDC
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_child_krb5_trace_cb] (0x4000): [9595] 1606243881.799000: Decoding FAST response
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_child_krb5_trace_cb] (0x4000): [9595] 1606243881.799001: TGS request result: -1765328324/Generic error (see e-text)
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_child_krb5_trace_cb] (0x4000): [9595] 1606243881.799002: Requesting tickets for restrictedkrbhost/ffcplws.somedomain.com@somedomain.com, referrals off
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_child_krb5_trace_cb] (0x4000): [9595] 1606243881.799003: Generated subkey for TGS request: aes256-cts/0AAB
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_child_krb5_trace_cb] (0x4000): [9595] 1606243881.799004: etypes requested in TGS request: aes256-cts, aes128-cts, aes256-sha2, aes128-sha2, des3-cbc-sha1, rc4-hmac, camellia128-cts, camellia256-cts
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_child_krb5_trace_cb] (0x4000): [9595] 1606243881.799006: Encoding request body and padata into FAST request
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_child_krb5_trace_cb] (0x4000): [9595] 1606243881.799007: Sending request (924 bytes) to somedomain.com
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_child_krb5_trace_cb] (0x4000): [9595] 1606243881.799008: Initiating TCP connection to stream 172.19.4.40:88
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_child_krb5_trace_cb] (0x4000): [9595] 1606243881.799009: Sending TCP request to stream 172.19.4.40:88
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_child_krb5_trace_cb] (0x4000): [9595] 1606243881.799010: Received answer (347 bytes) from stream 172.19.4.40:88
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_child_krb5_trace_cb] (0x4000): [9595] 1606243881.799011: Terminating TCP connection to stream 172.19.4.40:88
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_child_krb5_trace_cb] (0x4000): [9595] 1606243881.799012: Response was from master KDC
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_child_krb5_trace_cb] (0x4000): [9595] 1606243881.799013: Decoding FAST response
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_child_krb5_trace_cb] (0x4000): [9595] 1606243881.799014: TGS request result: -1765328324/Generic error (see e-text)
(2020-11-24 13:51:21): [krb5_child[9595]] [sss_child_krb5_trace_cb] (0x4000): [9595] 1606243881.799015: Destroying ccache MEMORY:nPr0P2A
(2020-11-24 13:51:21): [krb5_child[9595]] [validate_tgt] (0x0020): TGT failed verification using key for [restrictedkrbhost/ffcplws.somedomain.com@somedomain.com].
(2020-11-24 13:51:21): [krb5_child[9595]] [get_and_save_tgt] (0x0020): 1742: [-1765328324][Generic error (see e-text)]
(2020-11-24 13:51:21): [krb5_child[9595]] [map_krb5_error] (0x0020): 1834: [-1765328324][Generic error (see e-text)]
(2020-11-24 13:51:21): [krb5_child[9595]] [k5c_send_data] (0x0200): Received error code 1432158209
(2020-11-24 13:51:21): [krb5_child[9595]] [pack_response_packet] (0x2000): response packet size: [20]
(2020-11-24 13:51:21): [krb5_child[9595]] [k5c_send_data] (0x4000): Response sent.
(2020-11-24 13:51:21): [krb5_child[9595]] [main] (0x0400): krb5_child completed successfully
Content classification analyzes a document and returns a list of content categories that apply to the text found in the document. To classify the content in a document, call the classifyText method.
This section shows how to classify the content of a document.
Classify content
The following is an example of classifying content provided as a string:
Protocol
To classify the content of a document, make a POST request to the documents:classifyText REST method and provide the appropriate request body, as shown in the following example.
This example uses the gcloud auth application-default print-access-token command to obtain an access token for a service account set up for the project with the Google Cloud Platform Cloud SDK. For instructions on installing the Cloud SDK and setting up a project with a service account, see the Quickstart.
curl -X POST \
  -H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
  -H "Content-Type: application/json; charset=utf-8" \
  --data "{
    'document':{
      'type':'PLAIN_TEXT',
      'content':'Google, headquartered in Mountain View, unveiled the new Android phone at the Consumer Electronic Show. Sundar Pichai said in his keynote that users love their new Android phones.'
    }
  }" \
  "https://language.googleapis.com/v1/documents:classifyText"
C#
private static void ClassifyTextFromText(string text)
{
var client = LanguageServiceClient.Create();
var response = client.ClassifyText(new Document()
{
Content = text,
Type = Document.Types.Type.PlainText
});
WriteCategories(response.Categories);
}
private static void WriteCategories(IEnumerable<ClassificationCategory> categories)
{
Console.WriteLine("Categories:");
foreach (var category in categories)
{
Console.WriteLine($"\tCategory: {category.Name}");
Console.WriteLine($"\t\tConfidence: {category.Confidence}");
}
}
Go
func classifyText(ctx context.Context, client *language.Client, text string) (*languagepb.ClassifyTextResponse, error) {
return client.ClassifyText(ctx, &languagepb.ClassifyTextRequest{
Document: &languagepb.Document{
Source: &languagepb.Document_Content{
Content: text,
},
Type: languagepb.Document_PLAIN_TEXT,
},
})
}
Java
// Instantiate the Language client com.google.cloud.language.v1.LanguageServiceClient
try (LanguageServiceClient language = LanguageServiceClient.create()) {
// set content to the text string
Document doc = Document.newBuilder().setContent(text).setType(Type.PLAIN_TEXT).build();
ClassifyTextRequest request = ClassifyTextRequest.newBuilder().setDocument(doc).build();
// detect categories in the given text
ClassifyTextResponse response = language.classifyText(request);
for (ClassificationCategory category : response.getCategoriesList()) {
System.out.printf(
"Category name : %s, Confidence : %.3f\n",
category.getName(), category.getConfidence());
}
}
Node.js
// Imports the Google Cloud client library
const language = require('@google-cloud/language');
// Creates a client
const client = new language.LanguageServiceClient();
/**
* TODO(developer): Uncomment the following line to run this code.
*/
// const text = 'Your text to analyze, e.g. Hello, world!';
// Prepares a document, representing the provided text
const document = {
content: text,
type: 'PLAIN_TEXT',
};
// Classifies text in the document
const [classification] = await client.classifyText({document});
console.log('Categories:');
classification.categories.forEach(category => {
console.log(`Name: ${category.name}, Confidence: ${category.confidence}`);
});
Python
from google.cloud import language_v1
def sample_classify_text(text_content):
"""
Classifying Content in a String
Args:
text_content The text content to analyze. Must include at least 20 words.
"""
client = language_v1.LanguageServiceClient()
# text_content = 'That actor on TV makes movies in Hollywood and also stars in a variety of popular new TV shows.'
# Available types: PLAIN_TEXT, HTML
type_ = language_v1.Document.Type.PLAIN_TEXT
# Optional. If not specified, the language is automatically detected.
# For list of supported languages:
# https://cloud.google.com/natural-language/docs/languages
language = "en"
document = {"content": text_content, "type_": type_, "language": language}
response = client.classify_text(request = {'document': document})
# Loop through classified categories returned from the API
for category in response.categories:
# Get the name of the category representing the document.
# See the predefined taxonomy of categories:
# https://cloud.google.com/natural-language/docs/categories
print(u"Category name: {}".format(category.name))
# Get the confidence. Number representing how certain the classifier
# is that this category represents the provided text.
print(u"Confidence: {}".format(category.confidence))
PHP
use Google\Cloud\Language\V1\Document;
use Google\Cloud\Language\V1\Document\Type;
use Google\Cloud\Language\V1\LanguageServiceClient;
/** Uncomment and populate these variables in your code */
// $text = 'The text to analyze.';
// Make sure we have enough words (20+) to call classifyText
if (str_word_count($text) < 20) {
printf('20+ words are required to classify text.' . PHP_EOL);
return;
}
$languageServiceClient = new LanguageServiceClient();
try {
// Create a new Document, add text as content and set type to PLAIN_TEXT
$document = (new Document())
->setContent($text)
->setType(Type::PLAIN_TEXT);
    // Call the classifyText function
$response = $languageServiceClient->classifyText($document);
$categories = $response->getCategories();
// Print document information
foreach ($categories as $category) {
printf('Category Name: %s' . PHP_EOL, $category->getName());
printf('Confidence: %s' . PHP_EOL, $category->getConfidence());
print(PHP_EOL);
}
} finally {
$languageServiceClient->close();
}
Ruby
# text_content = "Text to classify"
require "google/cloud/language"
language = Google::Cloud::Language.language_service
document = { content: text_content, type: :PLAIN_TEXT }
response = language.classify_text document: document
categories = response.categories
categories.each do |category|
puts "Name: #{category.name} Confidence: #{category.confidence}"
end
Classify content from Google Cloud Storage
The following is an example of classifying content stored in a text file in Google Cloud Storage:
Protocol
To classify the content of a document stored in Google Cloud Storage, make a POST request to the documents:classifyText REST method and provide the appropriate request body with the path to the document, as shown in the following example.
curl -X POST \
  -H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
  -H "Content-Type: application/json; charset=utf-8" \
  --data "{
    'document':{
      'type':'PLAIN_TEXT',
      'gcsContentUri':'gs://<bucket-name>/<object-name>'
    }
  }" \
  "https://language.googleapis.com/v1/documents:classifyText"
C#
private static void ClassifyTextFromFile(string gcsUri)
{
var client = LanguageServiceClient.Create();
var response = client.ClassifyText(new Document()
{
GcsContentUri = gcsUri,
Type = Document.Types.Type.PlainText
});
WriteCategories(response.Categories);
}
private static void WriteCategories(IEnumerable<ClassificationCategory> categories)
{
Console.WriteLine("Categories:");
foreach (var category in categories)
{
Console.WriteLine($"\tCategory: {category.Name}");
Console.WriteLine($"\t\tConfidence: {category.Confidence}");
}
}
Go
func classifyTextFromGCS(ctx context.Context, client *language.Client, gcsURI string) (*languagepb.ClassifyTextResponse, error) {
return client.ClassifyText(ctx, &languagepb.ClassifyTextRequest{
Document: &languagepb.Document{
Source: &languagepb.Document_GcsContentUri{
GcsContentUri: gcsURI,
},
Type: languagepb.Document_PLAIN_TEXT,
},
})
}
Java
// Instantiate the Language client com.google.cloud.language.v1.LanguageServiceClient
try (LanguageServiceClient language = LanguageServiceClient.create()) {
// set the GCS content URI path
Document doc =
Document.newBuilder().setGcsContentUri(gcsUri).setType(Type.PLAIN_TEXT).build();
ClassifyTextRequest request = ClassifyTextRequest.newBuilder().setDocument(doc).build();
// detect categories in the given file
ClassifyTextResponse response = language.classifyText(request);
for (ClassificationCategory category : response.getCategoriesList()) {
System.out.printf(
"Category name : %s, Confidence : %.3f\n",
category.getName(), category.getConfidence());
}
}
Node.js
// Imports the Google Cloud client library.
const language = require('@google-cloud/language');
// Creates a client.
const client = new language.LanguageServiceClient();
/**
* TODO(developer): Uncomment the following lines to run this code
*/
// const bucketName = 'Your bucket name, e.g. my-bucket';
// const fileName = 'Your file name, e.g. my-file.txt';
// Prepares a document, representing a text file in Cloud Storage
const document = {
gcsContentUri: `gs://${bucketName}/${fileName}`,
type: 'PLAIN_TEXT',
};
// Classifies text in the document
const [classification] = await client.classifyText({document});
console.log('Categories:');
classification.categories.forEach(category => {
console.log(`Name: ${category.name}, Confidence: ${category.confidence}`);
});
Python
from google.cloud import language_v1
def sample_classify_text(gcs_content_uri):
"""
Classifying Content in text file stored in Cloud Storage
Args:
gcs_content_uri Google Cloud Storage URI where the file content is located.
e.g. gs://[Your Bucket]/[Path to File]
The text file must include at least 20 words.
"""
client = language_v1.LanguageServiceClient()
# gcs_content_uri = 'gs://cloud-samples-data/language/classify-entertainment.txt'
# Available types: PLAIN_TEXT, HTML
type_ = language_v1.Document.Type.PLAIN_TEXT
# Optional. If not specified, the language is automatically detected.
# For list of supported languages:
# https://cloud.google.com/natural-language/docs/languages
language = "en"
document = {"gcs_content_uri": gcs_content_uri, "type_": type_, "language": language}
response = client.classify_text(request = {'document': document})
# Loop through classified categories returned from the API
for category in response.categories:
# Get the name of the category representing the document.
# See the predefined taxonomy of categories:
# https://cloud.google.com/natural-language/docs/categories
print(u"Category name: {}".format(category.name))
# Get the confidence. Number representing how certain the classifier
# is that this category represents the provided text.
print(u"Confidence: {}".format(category.confidence))
PHP
use Google\Cloud\Language\V1\Document;
use Google\Cloud\Language\V1\Document\Type;
use Google\Cloud\Language\V1\LanguageServiceClient;
/** Uncomment and populate these variables in your code */
// $uri = 'The cloud storage object to analyze (gs://your-bucket-name/your-object-name)';
$languageServiceClient = new LanguageServiceClient();
try {
// Create a new Document, pass GCS URI and set type to PLAIN_TEXT
$document = (new Document())
->setGcsContentUri($uri)
->setType(Type::PLAIN_TEXT);
    // Call the classifyText function
$response = $languageServiceClient->classifyText($document);
$categories = $response->getCategories();
// Print document information
foreach ($categories as $category) {
printf('Category Name: %s' . PHP_EOL, $category->getName());
printf('Confidence: %s' . PHP_EOL, $category->getConfidence());
print(PHP_EOL);
}
} finally {
$languageServiceClient->close();
}
Ruby
# storage_path = "Path to file in Google Cloud Storage, eg. gs://bucket/file"
require "google/cloud/language"
language = Google::Cloud::Language.language_service
document = { gcs_content_uri: storage_path, type: :PLAIN_TEXT }
response = language.classify_text document: document
categories = response.categories
categories.each do |category|
puts "Name: #{category.name} Confidence: #{category.confidence}"
end
First time Arch user here, and I am having trouble setting the screen resolution for my primary screen. My second screen is a laptop screen with a working 1920x1080 resolution.
Problem:
I am trying to add a resolution of 1920x1200 for this primary screen, as currently I am only offered 1600x1200 for it.
The resolution was recognized on a previous laptop running 10.04 LTS with a KVM switch in between. I removed the KVM switch for now just to be sure that the switch is not causing the problem.
I executed:
xrandr --newmode "1920x1200_60.00" 193.16 1920 2048 2256 2592 1200 1201 1204 1242 -HSync +Vsync
xrandr --addmode VGA1 1920x1200_60.00
xrandr --output VGA1 --mode 1920x1200_60.00
After that last command I receive:
xrandr: Configure crtc 0 failed
I guess my monitor is not sending the appropriate EDID, but I am not sure what the best way to continue is now.
I have read about manually creating a /etc/X11/xorg.conf.d/10-monitor.conf file and also about forcing modes (https://wiki.archlinux.org/index.php/ke … s_and_EDID), but so far I have had no luck with this; my attempt at such a file is below.
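For reference, this is roughly the 10-monitor.conf I have been trying, put together from the wiki; the Modeline is the same one I added with xrandr --newmode above, and VGA1/intel match the output name and driver from my Xorg log, so treat it as a sketch rather than a known-good config:
Section "Monitor"
    Identifier "VGA1"
    # Same timings as the xrandr --newmode line above
    Modeline "1920x1200_60.00" 193.16 1920 2048 2256 2592 1200 1201 1204 1242 -HSync +Vsync
    Option "PreferredMode" "1920x1200_60.00"
EndSection
Section "Device"
    Identifier "Intel Graphics"
    Driver "intel"
    # Bind the Monitor section above to the VGA1 output
    Option "Monitor-VGA1" "VGA1"
EndSection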
My screen:
NEC MultiSync EA241WM
Output of xrandr:
Screen 0: minimum 8 x 8, current 3600 x 1080, maximum 32767 x 32767
eDP1 connected 1920x1080+0+0 (normal left inverted right x axis y axis) 344mm x 193mm
1920x1080 60.21*+ 40.14
1400x1050 59.98
1280x1024 60.02
1280x960 60.00
1024x768 60.00
800x600 60.32 56.25
640x480 59.94
VGA1 connected primary 1680x1050+1920+0 (normal left inverted right x axis y axis) 518mm x 324mm
1600x1200 60.00
1680x1050 59.95*
1400x1050 59.98
1280x1024 75.02 60.02
1440x900 59.89
1360x768 59.80
1152x864 75.00
1280x720 59.97
1024x768 75.08 70.07 60.00
832x624 74.55
800x600 72.19 75.00 60.32 56.25
640x480 75.00 72.81 66.67 60.00
720x400 70.08
1920x1200_60.00 60.00
HDMI1 disconnected (normal left inverted right x axis y axis)
VIRTUAL1 disconnected (normal left inverted right x axis y axis)
Output of /var/log/Xorg.0.log:
[ 2402.183]
X.Org X Server 1.16.0
Release Date: 2014-07-16
[ 2402.186] X Protocol Version 11, Revision 0
[ 2402.187] Build Operating System: Linux 3.15.5-2-ARCH x86_64
[ 2402.189] Current Operating System: Linux 3.15.8-1-ARCH #1 SMP PREEMPT Fri Aug 1 08:51:42 CEST 2014 x86_64
[ 2402.189] Kernel command line: initrd=\initramfs-linux.img root=/dev/sda2 rw
[ 2402.190] Build Date: 31 July 2014 11:53:19AM
[ 2402.191]
[ 2402.192] Current version of pixman: 0.32.6
[ 2402.193] Before reporting problems, check http://wiki.x.org
to make sure that you have the latest version.
[ 2402.193] Markers: (--) probed, (**) from config file, (==) default setting,
(++) from command line, (!!) notice, (II) informational,
(WW) warning, (EE) error, (NI) not implemented, (??) unknown.
[ 2402.196] (==) Log file: "/var/log/Xorg.0.log", Time: Sun Aug 10 22:48:25 2014
[ 2402.197] (==) Using config directory: "/etc/X11/xorg.conf.d"
[ 2402.198] (==) Using system config directory "/usr/share/X11/xorg.conf.d"
[ 2402.198] (==) No Layout section. Using the first Screen section.
[ 2402.198] (==) No screen section available. Using defaults.
[ 2402.198] (**) |-->Screen "Default Screen Section" (0)
[ 2402.198] (**) | |-->Monitor "<default monitor>"
[ 2402.198] (==) No monitor specified for screen "Default Screen Section".
Using a default monitor configuration.
[ 2402.198] (==) Automatically adding devices
[ 2402.198] (==) Automatically enabling devices
[ 2402.198] (==) Automatically adding GPU devices
[ 2402.198] (WW) The directory "/usr/share/fonts/TTF/" does not exist.
[ 2402.198] Entry deleted from font path.
[ 2402.198] (WW) The directory "/usr/share/fonts/OTF/" does not exist.
[ 2402.198] Entry deleted from font path.
[ 2402.198] (WW) The directory "/usr/share/fonts/Type1/" does not exist.
[ 2402.198] Entry deleted from font path.
[ 2402.198] (WW) `fonts.dir' not found (or not valid) in "/usr/share/fonts/100dpi/".
[ 2402.198] Entry deleted from font path.
[ 2402.198] (Run 'mkfontdir' on "/usr/share/fonts/100dpi/").
[ 2402.198] (WW) `fonts.dir' not found (or not valid) in "/usr/share/fonts/75dpi/".
[ 2402.198] Entry deleted from font path.
[ 2402.198] (Run 'mkfontdir' on "/usr/share/fonts/75dpi/").
[ 2402.198] (==) FontPath set to:
/usr/share/fonts/misc/
[ 2402.198] (==) ModulePath set to "/usr/lib/xorg/modules"
[ 2402.198] (II) The server relies on udev to provide the list of input devices.
If no devices become available, reconfigure udev or disable AutoAddDevices.
[ 2402.198] (II) Loader magic: 0x818d80
[ 2402.198] (II) Module ABI versions:
[ 2402.198] X.Org ANSI C Emulation: 0.4
[ 2402.198] X.Org Video Driver: 18.0
[ 2402.198] X.Org XInput driver : 21.0
[ 2402.198] X.Org Server Extension : 8.0
[ 2402.200] (II) systemd-logind: took control of session /org/freedesktop/login1/session/c4
[ 2402.200] (II) xfree86: Adding drm device (/dev/dri/card0)
[ 2402.200] (II) systemd-logind: got fd for /dev/dri/card0 226:0 fd 8 paused 0
[ 2402.201] (--) PCI:*(0:0:2:0) 8086:0416:1558:5455 rev 6, Mem @ 0xf7800000/4194304, 0xe0000000/268435456, I/O @ 0x0000f000/64
[ 2402.201] (WW) Open ACPI failed (/var/run/acpid.socket) (No such file or directory)
[ 2402.201] (II) LoadModule: "glx"
[ 2402.201] (II) Loading /usr/lib/xorg/modules/extensions/libglx.so
[ 2402.202] (II) Module glx: vendor="X.Org Foundation"
[ 2402.202] compiled for 1.16.0, module version = 1.0.0
[ 2402.202] ABI class: X.Org Server Extension, version 8.0
[ 2402.202] (==) AIGLX enabled
[ 2402.202] (==) Matched intel as autoconfigured driver 0
[ 2402.202] (==) Matched intel as autoconfigured driver 1
[ 2402.202] (==) Matched modesetting as autoconfigured driver 2
[ 2402.202] (==) Matched fbdev as autoconfigured driver 3
[ 2402.202] (==) Matched vesa as autoconfigured driver 4
[ 2402.202] (==) Assigned the driver to the xf86ConfigLayout
[ 2402.202] (II) LoadModule: "intel"
[ 2402.202] (II) Loading /usr/lib/xorg/modules/drivers/intel_drv.so
[ 2402.202] (II) Module intel: vendor="X.Org Foundation"
[ 2402.202] compiled for 1.16.0, module version = 2.99.914
[ 2402.202] Module class: X.Org Video Driver
[ 2402.202] ABI class: X.Org Video Driver, version 18.0
[ 2402.202] (II) LoadModule: "modesetting"
[ 2402.202] (WW) Warning, couldn't open module modesetting
[ 2402.202] (II) UnloadModule: "modesetting"
[ 2402.202] (II) Unloading modesetting
[ 2402.202] (EE) Failed to load module "modesetting" (module does not exist, 0)
[ 2402.202] (II) LoadModule: "fbdev"
[ 2402.202] (WW) Warning, couldn't open module fbdev
[ 2402.202] (II) UnloadModule: "fbdev"
[ 2402.202] (II) Unloading fbdev
[ 2402.202] (EE) Failed to load module "fbdev" (module does not exist, 0)
[ 2402.202] (II) LoadModule: "vesa"
[ 2402.202] (WW) Warning, couldn't open module vesa
[ 2402.202] (II) UnloadModule: "vesa"
[ 2402.202] (II) Unloading vesa
[ 2402.202] (EE) Failed to load module "vesa" (module does not exist, 0)
[ 2402.202] (II) intel: Driver for Intel(R) Integrated Graphics Chipsets:
i810, i810-dc100, i810e, i815, i830M, 845G, 854, 852GM/855GM, 865G,
915G, E7221 (i915), 915GM, 945G, 945GM, 945GME, Pineview GM,
Pineview G, 965G, G35, 965Q, 946GZ, 965GM, 965GME/GLE, G33, Q35, Q33,
GM45, 4 Series, G45/G43, Q45/Q43, G41, B43
[ 2402.202] (II) intel: Driver for Intel(R) HD Graphics: 2000-6000
[ 2402.202] (II) intel: Driver for Intel(R) Iris(TM) Graphics: 5100, 6100
[ 2402.202] (II) intel: Driver for Intel(R) Iris(TM) Pro Graphics: 5200, 6200, P6300
[ 2402.202] (++) using VT number 1
[ 2402.202] (--) controlling tty is VT number 1, auto-enabling KeepTty
[ 2402.202] (II) intel(0): Using Kernel Mode Setting driver: i915, version 1.6.0 20080730
[ 2402.203] (--) intel(0): Integrated Graphics Chipset: Intel(R) HD Graphics 4600
[ 2402.203] (--) intel(0): CPU: x86-64, sse2, sse3, ssse3, sse4.1, sse4.2, avx, avx2
[ 2402.203] (II) intel(0): Creating default Display subsection in Screen section
"Default Screen Section" for depth/fbbpp 24/32
[ 2402.203] (==) intel(0): Depth 24, (--) framebuffer bpp 32
[ 2402.203] (==) intel(0): RGB weight 888
[ 2402.203] (==) intel(0): Default visual is TrueColor
[ 2402.203] (II) intel(0): Output eDP1 has no monitor section
[ 2402.203] (--) intel(0): Found backlight control interface acpi_video0 (type 'firmware') for output eDP1
[ 2402.203] (II) intel(0): Output VGA1 has no monitor section
[ 2402.203] (II) intel(0): Output HDMI1 has no monitor section
[ 2402.203] (--) intel(0): Using a maximum size of 256x256 for hardware cursors
[ 2402.203] (II) intel(0): Output VIRTUAL1 has no monitor section
[ 2402.203] (--) intel(0): Output eDP1 using initial mode 1920x1080 on pipe 0
[ 2402.203] (==) intel(0): TearFree disabled
[ 2402.203] (==) intel(0): DPI set to (96, 96)
[ 2402.203] (II) Loading sub module "dri2"
[ 2402.203] (II) LoadModule: "dri2"
[ 2402.203] (II) Module "dri2" already built-in
[ 2402.203] (II) Loading sub module "present"
[ 2402.203] (II) LoadModule: "present"
[ 2402.203] (II) Module "present" already built-in
[ 2402.203] (==) Depth 24 pixmap format is 32 bpp
[ 2402.203] (II) intel(0): SNA initialized with Haswell (gen7.5, gt2) backend
[ 2402.203] (==) intel(0): Backing store enabled
[ 2402.203] (==) intel(0): Silken mouse enabled
[ 2402.203] (II) intel(0): HW Cursor enabled
[ 2402.203] (II) intel(0): RandR 1.2 enabled, ignore the following RandR disabled message.
[ 2402.203] (==) intel(0): DPMS enabled
[ 2402.203] (II) intel(0): [DRI2] Setup complete
[ 2402.203] (II) intel(0): [DRI2] DRI driver: i965
[ 2402.203] (II) intel(0): [DRI2] VDPAU driver: i965
[ 2402.203] (II) intel(0): direct rendering: DRI2 enabled
[ 2402.203] (II) intel(0): hardware support for Present enabled
[ 2402.203] (==) intel(0): display hotplug detection enabled
[ 2402.203] (--) RandR disabled
[ 2402.219] (II) AIGLX: enabled GLX_MESA_copy_sub_buffer
[ 2402.219] (II) AIGLX: enabled GLX_ARB_create_context
[ 2402.219] (II) AIGLX: enabled GLX_ARB_create_context_profile
[ 2402.219] (II) AIGLX: enabled GLX_EXT_create_context_es2_profile
[ 2402.219] (II) AIGLX: enabled GLX_INTEL_swap_event
[ 2402.219] (II) AIGLX: enabled GLX_SGI_swap_control and GLX_MESA_swap_control
[ 2402.219] (II) AIGLX: enabled GLX_EXT_framebuffer_sRGB
[ 2402.219] (II) AIGLX: enabled GLX_ARB_fbconfig_float
[ 2402.219] (II) AIGLX: GLX_EXT_texture_from_pixmap backed by buffer objects
[ 2402.219] (II) AIGLX: enabled GLX_ARB_create_context_robustness
[ 2402.219] (II) AIGLX: Loaded and initialized i965
[ 2402.219] (II) GLX: Initialized DRI2 GL provider for screen 0
[ 2402.220] (II) intel(0): switch to mode 1920x1080@60.2 on eDP1 using pipe 0, position (0, 0), rotation normal, reflection none
[ 2402.233] (II) intel(0): Setting screen physical size to 508 x 285
[ 2402.251] (II) config/udev: Adding input device Power Button (/dev/input/event4)
[ 2402.251] (**) Power Button: Applying InputClass "evdev keyboard catchall"
[ 2402.251] (II) LoadModule: "evdev"
[ 2402.251] (II) Loading /usr/lib/xorg/modules/input/evdev_drv.so
[ 2402.251] (II) Module evdev: vendor="X.Org Foundation"
[ 2402.251] compiled for 1.16.0, module version = 2.9.0
[ 2402.251] Module class: X.Org XInput Driver
[ 2402.251] ABI class: X.Org XInput driver, version 21.0
[ 2402.252] (II) systemd-logind: got fd for /dev/input/event4 13:68 fd 16 paused 0
[ 2402.252] (II) Using input driver 'evdev' for 'Power Button'
[ 2402.252] (**) Power Button: always reports core events
[ 2402.252] (**) evdev: Power Button: Device: "/dev/input/event4"
[ 2402.252] (--) evdev: Power Button: Vendor 0 Product 0x1
[ 2402.252] (--) evdev: Power Button: Found keys
[ 2402.252] (II) evdev: Power Button: Configuring as keyboard
[ 2402.252] (**) Option "config_info" "udev:/sys/devices/LNXSYSTM:00/LNXPWRBN:00/input/input8/event4"
[ 2402.252] (II) XINPUT: Adding extended input device "Power Button" (type: KEYBOARD, id 6)
[ 2402.252] (**) Option "xkb_rules" "evdev"
[ 2402.252] (**) Option "xkb_model" "pc104"
[ 2402.252] (**) Option "xkb_layout" "us"
[ 2402.265] (II) config/udev: Adding input device Video Bus (/dev/input/event11)
[ 2402.265] (**) Video Bus: Applying InputClass "evdev keyboard catchall"
[ 2402.266] (II) systemd-logind: got fd for /dev/input/event11 13:75 fd 17 paused 0
[ 2402.266] (II) Using input driver 'evdev' for 'Video Bus'
[ 2402.266] (**) Video Bus: always reports core events
[ 2402.266] (**) evdev: Video Bus: Device: "/dev/input/event11"
[ 2402.266] (--) evdev: Video Bus: Vendor 0 Product 0x6
[ 2402.266] (--) evdev: Video Bus: Found keys
[ 2402.266] (II) evdev: Video Bus: Configuring as keyboard
[ 2402.266] (**) Option "config_info" "udev:/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/LNXVIDEO:00/input/input16/event11"
[ 2402.266] (II) XINPUT: Adding extended input device "Video Bus" (type: KEYBOARD, id 7)
[ 2402.266] (**) Option "xkb_rules" "evdev"
[ 2402.266] (**) Option "xkb_model" "pc104"
[ 2402.266] (**) Option "xkb_layout" "us"
[ 2402.266] (II) config/udev: Adding input device Power Button (/dev/input/event1)
[ 2402.266] (**) Power Button: Applying InputClass "evdev keyboard catchall"
[ 2402.266] (II) systemd-logind: got fd for /dev/input/event1 13:65 fd 18 paused 0
[ 2402.266] (II) Using input driver 'evdev' for 'Power Button'
[ 2402.266] (**) Power Button: always reports core events
[ 2402.266] (**) evdev: Power Button: Device: "/dev/input/event1"
[ 2402.266] (--) evdev: Power Button: Vendor 0 Product 0x1
[ 2402.266] (--) evdev: Power Button: Found keys
[ 2402.266] (II) evdev: Power Button: Configuring as keyboard
[ 2402.266] (**) Option "config_info" "udev:/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input5/event1"
[ 2402.266] (II) XINPUT: Adding extended input device "Power Button" (type: KEYBOARD, id 8)
[ 2402.266] (**) Option "xkb_rules" "evdev"
[ 2402.266] (**) Option "xkb_model" "pc104"
[ 2402.266] (**) Option "xkb_layout" "us"
[ 2402.267] (II) config/udev: Adding input device Lid Switch (/dev/input/event3)
[ 2402.267] (II) No input driver specified, ignoring this device.
[ 2402.267] (II) This device may have been added with another device file.
[ 2402.267] (II) config/udev: Adding input device Sleep Button (/dev/input/event2)
[ 2402.267] (**) Sleep Button: Applying InputClass "evdev keyboard catchall"
[ 2402.267] (II) systemd-logind: got fd for /dev/input/event2 13:66 fd 19 paused 0
[ 2402.267] (II) Using input driver 'evdev' for 'Sleep Button'
[ 2402.267] (**) Sleep Button: always reports core events
[ 2402.267] (**) evdev: Sleep Button: Device: "/dev/input/event2"
[ 2402.267] (--) evdev: Sleep Button: Vendor 0 Product 0x3
[ 2402.267] (--) evdev: Sleep Button: Found keys
[ 2402.267] (II) evdev: Sleep Button: Configuring as keyboard
[ 2402.267] (**) Option "config_info" "udev:/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input6/event2"
[ 2402.267] (II) XINPUT: Adding extended input device "Sleep Button" (type: KEYBOARD, id 9)
[ 2402.267] (**) Option "xkb_rules" "evdev"
[ 2402.267] (**) Option "xkb_model" "pc104"
[ 2402.267] (**) Option "xkb_layout" "us"
[ 2402.268] (II) config/udev: Adding input device HDA Intel HDMI HDMI/DP,pcm=3 (/dev/input/event12)
[ 2402.268] (II) No input driver specified, ignoring this device.
[ 2402.268] (II) This device may have been added with another device file.
[ 2402.268] (II) config/udev: Adding input device HDA Intel HDMI HDMI/DP,pcm=7 (/dev/input/event13)
[ 2402.268] (II) No input driver specified, ignoring this device.
[ 2402.268] (II) This device may have been added with another device file.
[ 2402.268] (II) config/udev: Adding input device HDA Intel HDMI HDMI/DP,pcm=8 (/dev/input/event14)
[ 2402.268] (II) No input driver specified, ignoring this device.
[ 2402.268] (II) This device may have been added with another device file.
[ 2402.268] (II) config/udev: Adding input device Chicony USB 2.0 Camera (/dev/input/event9)
[ 2402.268] (**) Chicony USB 2.0 Camera: Applying InputClass "evdev keyboard catchall"
[ 2402.268] (II) systemd-logind: got fd for /dev/input/event9 13:73 fd 20 paused 0
[ 2402.268] (II) Using input driver 'evdev' for 'Chicony USB 2.0 Camera'
[ 2402.268] (**) Chicony USB 2.0 Camera: always reports core events
[ 2402.268] (**) evdev: Chicony USB 2.0 Camera: Device: "/dev/input/event9"
[ 2402.268] (--) evdev: Chicony USB 2.0 Camera: Vendor 0x4f2 Product 0xb35a
[ 2402.268] (--) evdev: Chicony USB 2.0 Camera: Found keys
[ 2402.268] (II) evdev: Chicony USB 2.0 Camera: Configuring as keyboard
[ 2402.268] (**) Option "config_info" "udev:/sys/devices/pci0000:00/0000:00:14.0/usb1/1-8/1-8:1.0/input/input15/event9"
[ 2402.268] (II) XINPUT: Adding extended input device "Chicony USB 2.0 Camera" (type: KEYBOARD, id 10)
[ 2402.268] (**) Option "xkb_rules" "evdev"
[ 2402.268] (**) Option "xkb_model" "pc104"
[ 2402.268] (**) Option "xkb_layout" "us"
[ 2402.269] (II) config/udev: Adding input device HDA Digital PCBeep (/dev/input/event6)
[ 2402.269] (II) No input driver specified, ignoring this device.
[ 2402.269] (II) This device may have been added with another device file.
[ 2402.269] (II) config/udev: Adding input device HDA Intel PCH Mic (/dev/input/event7)
[ 2402.269] (II) No input driver specified, ignoring this device.
[ 2402.269] (II) This device may have been added with another device file.
[ 2402.269] (II) config/udev: Adding input device HDA Intel PCH Front Headphone (/dev/input/event8)
[ 2402.269] (II) No input driver specified, ignoring this device.
[ 2402.269] (II) This device may have been added with another device file.
[ 2402.269] (II) config/udev: Adding input device AT Translated Set 2 keyboard (/dev/input/event0)
[ 2402.269] (**) AT Translated Set 2 keyboard: Applying InputClass "evdev keyboard catchall"
[ 2402.269] (II) systemd-logind: got fd for /dev/input/event0 13:64 fd 21 paused 0
[ 2402.269] (II) Using input driver 'evdev' for 'AT Translated Set 2 keyboard'
[ 2402.269] (**) AT Translated Set 2 keyboard: always reports core events
[ 2402.269] (**) evdev: AT Translated Set 2 keyboard: Device: "/dev/input/event0"
[ 2402.269] (--) evdev: AT Translated Set 2 keyboard: Vendor 0x1 Product 0x1
[ 2402.269] (--) evdev: AT Translated Set 2 keyboard: Found keys
[ 2402.269] (II) evdev: AT Translated Set 2 keyboard: Configuring as keyboard
[ 2402.269] (**) Option "config_info" "udev:/sys/devices/platform/i8042/serio0/input/input0/event0"
[ 2402.269] (II) XINPUT: Adding extended input device "AT Translated Set 2 keyboard" (type: KEYBOARD, id 11)
[ 2402.269] (**) Option "xkb_rules" "evdev"
[ 2402.269] (**) Option "xkb_model" "pc104"
[ 2402.269] (**) Option "xkb_layout" "us"
[ 2402.270] (II) config/udev: Adding input device SynPS/2 Synaptics TouchPad (/dev/input/event10)
[ 2402.270] (**) SynPS/2 Synaptics TouchPad: Applying InputClass "evdev touchpad catchall"
[ 2402.270] (**) SynPS/2 Synaptics TouchPad: Applying InputClass "touchpad catchall"
[ 2402.270] (**) SynPS/2 Synaptics TouchPad: Applying InputClass "Default clickpad buttons"
[ 2402.270] (II) LoadModule: "synaptics"
[ 2402.270] (II) Loading /usr/lib/xorg/modules/input/synaptics_drv.so
[ 2402.270] (II) Module synaptics: vendor="X.Org Foundation"
[ 2402.270] compiled for 1.16.0, module version = 1.8.0
[ 2402.270] Module class: X.Org XInput Driver
[ 2402.270] ABI class: X.Org XInput driver, version 21.0
[ 2402.270] (II) systemd-logind: got fd for /dev/input/event10 13:74 fd 22 paused 0
[ 2402.270] (II) Using input driver 'synaptics' for 'SynPS/2 Synaptics TouchPad'
[ 2402.270] (**) SynPS/2 Synaptics TouchPad: always reports core events
[ 2402.270] (**) Option "Device" "/dev/input/event10"
[ 2402.356] (II) synaptics: SynPS/2 Synaptics TouchPad: ignoring touch events for semi-multitouch device
[ 2402.356] (--) synaptics: SynPS/2 Synaptics TouchPad: x-axis range 1472 - 5692 (res 66)
[ 2402.356] (--) synaptics: SynPS/2 Synaptics TouchPad: y-axis range 1408 - 4680 (res 102)
[ 2402.356] (--) synaptics: SynPS/2 Synaptics TouchPad: pressure range 0 - 255
[ 2402.356] (--) synaptics: SynPS/2 Synaptics TouchPad: finger width range 0 - 15
[ 2402.356] (--) synaptics: SynPS/2 Synaptics TouchPad: buttons: left right double triple
[ 2402.356] (--) synaptics: SynPS/2 Synaptics TouchPad: Vendor 0x2 Product 0x7
[ 2402.356] (**) Option "TapButton1" "1"
[ 2402.356] (**) Option "TapButton2" "2"
[ 2402.356] (**) Option "TapButton3" "3"
[ 2402.356] (--) synaptics: SynPS/2 Synaptics TouchPad: touchpad found
[ 2402.356] (**) SynPS/2 Synaptics TouchPad: always reports core events
[ 2402.356] (**) Option "config_info" "udev:/sys/devices/platform/i8042/serio2/input/input11/event10"
[ 2402.356] (II) XINPUT: Adding extended input device "SynPS/2 Synaptics TouchPad" (type: TOUCHPAD, id 12)
[ 2402.356] (**) synaptics: SynPS/2 Synaptics TouchPad: (accel) MinSpeed is now constant deceleration 2.5
[ 2402.356] (**) synaptics: SynPS/2 Synaptics TouchPad: (accel) MaxSpeed is now 1.75
[ 2402.356] (**) synaptics: SynPS/2 Synaptics TouchPad: (accel) AccelFactor is now 0.037
[ 2402.356] (**) SynPS/2 Synaptics TouchPad: (accel) keeping acceleration scheme 1
[ 2402.356] (**) SynPS/2 Synaptics TouchPad: (accel) acceleration profile 1
[ 2402.356] (**) SynPS/2 Synaptics TouchPad: (accel) acceleration factor: 2.000
[ 2402.356] (**) SynPS/2 Synaptics TouchPad: (accel) acceleration threshold: 4
[ 2402.356] (--) synaptics: SynPS/2 Synaptics TouchPad: touchpad found
[ 2402.357] (II) config/udev: Adding input device SynPS/2 Synaptics TouchPad (/dev/input/mouse0)
[ 2402.357] (**) SynPS/2 Synaptics TouchPad: Ignoring device from InputClass "touchpad ignore duplicates"
[ 2402.357] (II) config/udev: Adding input device PC Speaker (/dev/input/event5)
[ 2402.357] (II) No input driver specified, ignoring this device.
[ 2402.357] (II) This device may have been added with another device file.
[ 2447.861] (II) UnloadModule: "synaptics"
[ 2447.861] (II) systemd-logind: releasing fd for 13:74
[ 2447.929] (II) evdev: AT Translated Set 2 keyboard: Close
[ 2447.929] (II) UnloadModule: "evdev"
[ 2447.929] (II) systemd-logind: releasing fd for 13:64
[ 2448.009] (II) evdev: Chicony USB 2.0 Camera: Close
[ 2448.009] (II) UnloadModule: "evdev"
[ 2448.009] (II) systemd-logind: releasing fd for 13:73
[ 2448.089] (II) evdev: Sleep Button: Close
[ 2448.089] (II) UnloadModule: "evdev"
[ 2448.089] (II) systemd-logind: releasing fd for 13:66
[ 2448.129] (II) evdev: Power Button: Close
[ 2448.129] (II) UnloadModule: "evdev"
[ 2448.129] (II) systemd-logind: releasing fd for 13:65
[ 2448.169] (II) evdev: Video Bus: Close
[ 2448.169] (II) UnloadModule: "evdev"
[ 2448.169] (II) systemd-logind: releasing fd for 13:75
[ 2448.209] (II) evdev: Power Button: Close
[ 2448.209] (II) UnloadModule: "evdev"
[ 2448.209] (II) systemd-logind: releasing fd for 13:68
[ 2449.439] (EE) Server terminated successfully (0). Closing log file.
Last edited by undersound (2014-08-19 18:00:15)
Offline
You can try arandr; it might help manage xrandr a little better. It is a GUI for xrandr and can save settings as bash scripts with pre-configured xrandr command lines, for faster switching between wanted resolutions.
Help to make Arora bug free!!
日不落 | Year 2081 | 笑傲江湖 | One more a really good book in my collection the Drystoll.
Offline
Thanks for the advice. Unfortunately it doesn't fix the problem.
Offline
I guess my monitor is not sending the appropriate EDID but I am not sure what's the best way to continue now.
To confirm that, run
xrandr --props
You can use 'edid-decode' from the AUR git package to read the information; post it here anyway.
Offline
Thanks, here are both outputs:
xrandr --props:
Screen 0: minimum 8 x 8, current 3600 x 1080, maximum 32767 x 32767
eDP1 connected 1920x1080+0+0 (normal left inverted right x axis y axis) 344mm x 193mm
_MUTTER_PRESENTATION_OUTPUT: 0
EDID:
00ffffffffffff0006afed1200000000
00160104952213780206959757539426
1c505400000001010101010101010101
0101010101013c3780c0703820403064
8e0058c110000018d32480c070382040
30648e0058c110000018000000fe0041
554f0a202020202020202020000000fe
004231353648414e30312e32200a0022
BACKLIGHT: 4882
range: (0, 4882)
Backlight: 4882
range: (0, 4882)
scaling mode: Full aspect
supported: None, Full, Center, Full aspect
Broadcast RGB: Automatic
supported: Automatic, Full, Limited 16:235
audio: auto
supported: force-dvi, off, auto, on
1920x1080 60.21*+ 40.14
1400x1050 59.98
1280x1024 60.02
1280x960 60.00
1024x768 60.00
800x600 60.32 56.25
640x480 59.94
VGA1 connected primary 1680x1050+1920+0 (normal left inverted right x axis y axis) 518mm x 324mm
EDID:
00ffffffffffff0038a34e6701010101
101401030e342078ea9ef5a6564b9a25
125054bfef8081c0818090408bc09500
a940b300d1007d4b80a072b02d4088c8
360006442100001c000000fd00384c1f
5214000a202020202020000000fc0045
41323431574d0a2020202020000000ff
0030343130393234304e420a20200082
_MUTTER_PRESENTATION_OUTPUT: 0
1600x1200 60.00
1680x1050 59.95*
1400x1050 59.98
1280x1024 75.02 60.02
1440x900 59.89
1360x768 59.80
1152x864 75.00
1280x720 59.97
1024x768 75.08 70.07 60.00
832x624 74.55
800x600 72.19 75.00 60.32 56.25
640x480 75.00 72.81 66.67 60.00
720x400 70.08
HDMI1 disconnected (normal left inverted right x axis y axis)
Broadcast RGB: Automatic
supported: Automatic, Full, Limited 16:235
audio: auto
supported: force-dvi, off, auto, on
VIRTUAL1 disconnected (normal left inverted right x axis y axis)
xrandr --props | edid-decode:
Extracted contents:
header: 00 ff ff ff ff ff ff 00
serial number: 06 af ed 12 00 00 00 00 00 16
version: 01 04
basic params: 95 22 13 78 02
chroma info: 06 95 97 57 53 94 26 1c 50 54
established: 00 00 00
standard: 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01
descriptor 1: 3c 37 80 c0 70 38 20 40 30 64 8e 00 58 c1 10 00 00 18
descriptor 2: d3 24 80 c0 70 38 20 40 30 64 8e 00 58 c1 10 00 00 18
descriptor 3: 00 00 00 fe 00 41 55 4f 0a 20 20 20 20 20 20 20 20 20
descriptor 4: 00 00 00 fe 00 42 31 35 36 48 41 4e 30 31 2e 32 20 0a
extensions: 00
checksum: 22
Manufacturer: AUO Model 12ed Serial Number 0
Made week 0 of 2012
EDID version: 1.4
Digital display
6 bits per primary color channel
DisplayPort interface
Maximum image size: 34 cm x 19 cm
Gamma: 2.20
Supported color formats: RGB 4:4:4
First detailed timing is preferred timing
Established timings supported:
Standard timings supported:
Detailed mode: Clock 141.400 MHz, 344 mm x 193 mm
1920 1968 2068 2112 hborder 0
1080 1088 1102 1112 vborder 0
-hsync -vsync
Detailed mode: Clock 94.270 MHz, 344 mm x 193 mm
1920 1968 2068 2112 hborder 0
1080 1088 1102 1112 vborder 0
-hsync -vsync
ASCII string: AUO
ASCII string: B156HAN01
Checksum: 0x22 (valid)
EDID block does NOT conform to EDID 1.3!
Missing name descriptor
Missing monitor ranges
Detailed block string not properly terminated
Offline
That EDID was for the eDP1 not VGA1.
Extracted contents:
header: 00 ff ff ff ff ff ff 00
serial number: 38 a3 4e 67 01 01 01 01 10 14
version: 01 03
basic params: 0e 34 20 78 ea
chroma info: 9e f5 a6 56 4b 9a 25 12 50 54
established: bf ef 80
standard: 81 c0 81 80 90 40 8b c0 95 00 a9 40 b3 00 d1 00
descriptor 1: 7d 4b 80 a0 72 b0 2d 40 88 c8 36 00 06 44 21 00 00 1c
descriptor 2: 00 00 00 fd 00 38 4c 1f 52 14 00 0a 20 20 20 20 20 20
descriptor 3: 00 00 00 fc 00 45 41 32 34 31 57 4d 0a 20 20 20 20 20
descriptor 4: 00 00 00 ff 00 30 34 31 30 39 32 34 30 4e 42 0a 20 20
extensions: 00
checksum: 82
Manufacturer: NEC Model 674e Serial Number 16843009
Made week 16 of 2010
EDID version: 1.3
Analog display, Input voltage level: 0.7/0.3 V
Sync: Separate Composite SyncOnGreen
Maximum image size: 52 cm x 32 cm
Gamma: 2.20
DPMS levels: Standby Suspend Off
RGB color display
First detailed timing is preferred timing
Established timings supported:
720x400@70Hz
640x480@60Hz
640x480@67Hz
640x480@72Hz
640x480@75Hz
800x600@56Hz
800x600@60Hz
800x600@72Hz
800x600@75Hz
832x624@75Hz
1024x768@60Hz
1024x768@70Hz
1024x768@75Hz
1280x1024@75Hz
1152x870@75Hz
Standard timings supported:
1280x720@60Hz
1280x1024@60Hz
1400x1050@60Hz
1360x765@60Hz
1440x900@60Hz
1600x1200@60Hz
1680x1050@60Hz
1920x1200@60Hz
Detailed mode: Clock 193.250 MHz, 518 mm x 324 mm
1920 2056 2256 2592 hborder 0
1200 1203 1209 1245 vborder 0
-hsync +vsync
Monitor ranges (GTF): 56-76Hz V, 31-82kHz H, max dotclock 200MHz
Monitor name: EA241WM
Serial number: 04109240NB
Checksum: 0x82 (valid)
Try using these instead:
"1920x1200_60.00" 193.25 1920 2056 2256 2592 1200 1203 1209 1245 -hsync +vsync"1920x1200R" 154.00 1920 1968 2000 2080 1200 1203 1209 1235 +hsync -vsync
You could always try to figure out how Ubuntu or Mint does it.
Here is something that probably will be of interest to you, untested.
Edit: Try setting that resolution with only VGA1 connected, ergo with eDP1 disconnected.
Last edited by emeres (2014-08-19 22:09:26)
Offline
You can try...
Moderator comment:
Andy_Crowd: Posting here hoping to get your attention. Please update your email address - I responded to your PM, but it bounced
Nothing is too wonderful to be true, if it be consistent with the laws of nature -- Michael Faraday
Sometimes it is the people no one can imagine anything of who do the things no one can imagine. -- Alan Turing
---
How to Ask Questions the Smart Way
Offline
That EDID was for the eDP1 not VGA1.
Thanks, I must have copied the wrong one.
Can't believe I overlooked this, but I think the whole problem is that my screen's native resolution is 1920x1200 and my video card only supports up to 1920x1080.
Could this be the problem?
Offline
Can't believe I overlooked this, but I think the whole problem is that my screen's native resolution is 1920x1200 and my video card only supports up to 1920x1080.
Could this be the problem?
I cannot decide if I should answer "You think?" or simply "Yes.". You do realize that many people invest their free time here? Get really good at something in [Arch] Linux and help others out, then you will be forgiven.
@ewaller try here:
https://bbs.archlinux.org/viewtopic.php?id=185748
Offline
Thanks for your help.
Offline
You are welcome.
Offline
It works.
After trying your suggestion:
"1920x1200_60.00" 193.25 1920 2056 2256 2592 1200 1203 1209 1245 -hsync +vsync
I got the following error message:
BadName (named color or font does not exist)
Which led me to this post.
Summary in there is to use a new name:
xrandr --newmode "newname" 193.25 1920 2056 2256 2592 1200 1203 1209 1245 1080 -hsync +vsync
When enabling the eDP1 screen again via Preferences -> Display I saw 1920x1200 available as an option for the VGA1. Selected that and resolution is properly being applied now.
Thanks again for your time and help.
Last edited by undersound (2014-08-20 20:54:38)
Offline
So this limit was either an understatement from the manufacturer or a reference to a specific part/socket of the video card. You could notice image degradation however. Take a look here.
Offline
|
Seems to be more or less OK. I don’t get the “System GUID at…” here though when I print, just the name.
You can shorten your script considerably though (unless you want the intermediate results as well):
import rhinoscriptsyntax as rs

all_objs = rs.AllObjects()
layernames = set()
for obj in all_objs:
    if rs.IsObjectHidden(obj): layernames.add(rs.ObjectLayer(obj))
if layernames:
    for layername in layernames: print layername
else:
    print "No hidden objects found!"
Why did I use set() for the layer names instead of a list[]? Well, if you have multiple objects hidden on the same layer, each object adds the same name to the list, so you will print the layer multiple times. Unlike a list, a set in Python is composed only of unique members, so there will be only one instance of a layer name in there even if multiple objects are hidden on the same layer. On the other hand a set is “unordered” so you cannot depend on the order the layer names will be printed like you might with a list.
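As a quick plain-Python illustration of that point (hypothetical layer names, outside Rhino): duplicates collapse in a set, while a list keeps every repeat:
layers = ["Walls", "Doors", "Walls", "Walls"]  # several hidden objects on the same layer
print(layers)               # the list keeps every duplicate entry
print(set(layers))          # only "Walls" and "Doors" remain, in no particular order
print(sorted(set(layers)))  # sort it if you need a predictable order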
|
I finally got around to working on my Amazon project again.
Misc Notes
# Change postgres data directory
File path:
/etc/postgresql/10/main/postgresql.conf
File System Headache
I decided to clean up my hard drives, but I forgot how much of a headache it was trying to get an NTFS drive to work with transmission-daemon. Whatever, I'll just save to my ext4 partition for now and fix it later.
*Update
I bricked my OS install and went down a 3-hour nightmare trying to fix it. I eventually discovered that the culprit was a label from my old partition mount point in the fstab file. Solution:
sudo nano /etc/fstab
# comment out old label
ctrl + o to save
ctrl + x to exit
reboot
My computer still doesn't restart properly because I broke something in the boot order while trying to fix it. Not a big deal; I just enter my username/password in the terminal and then type startx.
LexSum Progress
Had to slice to 50 for each rating to save time, but I can probably make it longer for launch. At first I was thinking there would be 60 million entities to process, but actually it's more like 900k x 5 (for each rating), and as long as I don't lexsum 1000+ reviews per rating it should finish in a few days. I really need to add a timer function asap. I can just time 1000 or so products, multiply that by 900k or whatever the total number of products in my database is, and I should have a pretty good idea how long it will take.
if len(titles) > 50:
titlejoin = ' '.join(lex_sum(' '.join(titles[:50]), sum_count))
textjoin = ' '.join(lex_sum(' '.join(comments[:50]), sum_count))
else:
titlejoin = ' '.join(lex_sum(' '.join(titles), sum_count))
textjoin = ' '.join(lex_sum(' '.join(comments), sum_count))
I'm thinking I can clean these lines up now that I'm staring at it. Maybe something like:
titlejoin = ' '.join(
lex_sum(' '.join(titles[:min(len(titles), 50)]), sum_count))
textjoin = ' '.join(
lex_sum(' '.join(comments[:min(len(titles), 50)]), sum_count))
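A small side note (mine, not the original post's): Python slicing is already clamped to the sequence length, so titles[:50] gives the same result even when there are fewer than 50 titles, which makes the min() call redundant:
titles = ["only", "three", "titles"]  # hypothetical short list
print(titles[:50])                    # ['only', 'three', 'titles'] - no error, no padding
print(titles[:50] == titles[:min(len(titles), 50)])  # True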
My estimated-time-remaining function appends the elapsed time every ten iterations to a list, takes the last 500 or fewer entries of that list and averages them, and finally multiplies that average by the number of remaining ten-iteration blocks and displays it in a human-readable format:
import functools
import time

# Initialised once, before the main processing loop
avg_sec = 0
times = []
start = time.time()

# ...inside the main processing loop, where `count` is the current iteration
# and `limit` is the total number of iterations:

# Display time remaining
if avg_sec:
    seconds_left = ((limit - count) / 10) * avg_sec
    m, s = divmod(seconds_left, 60)
    h, m = divmod(m, 60)
    print('Estimated Time Left: {}h {}m {}s'.format(
        round(h), round(m), round(s)))
if not count % 10:
    end = time.time()
    time_block = end - start
    start = end
    times.append(time_block)
    avg_sec = functools.reduce(
        lambda x, y: x + y, times[-min(len(times), 500):]) / len(times[-min(len(times), 500):])
    print('Average time per 10:', round(avg_sec, 2), 'seconds')
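A slightly tidier variant of the same idea (my sketch, not code from the post): a bounded deque keeps only the most recent 500 samples, so the rolling average becomes a one-liner. Here limit and process_one_product are hypothetical stand-ins for the real loop bound and per-product work:
import time
from collections import deque

limit = 900000                      # total number of products, per the post's estimate
times = deque(maxlen=500)           # automatically discards samples older than the last 500
start = time.time()
for count in range(1, limit + 1):
    process_one_product(count)      # hypothetical stand-in for the real per-product work
    if count % 10 == 0:
        times.append(time.time() - start)
        start = time.time()
        avg_sec = sum(times) / len(times)               # average seconds per 10 products
        seconds_left = ((limit - count) / 10) * avg_sec
        m, s = divmod(seconds_left, 60)
        h, m = divmod(m, 60)
        print('Estimated Time Left: {}h {}m {}s'.format(round(h), round(m), round(s)))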
Another thought I had is that this save_df module I coded (it's at like 400 lines of code already x_x) is actually a crucial part of my ultimate code base. I'm pretty happy that I spent so much time writing it into proper functions.
|
Let's learn Aardvark
Welcome to Aardvark, the language that has entranced programmers by its simplicity and amazingness for the last few days.
My goal is that by the end of this lesson you will know the basics of the Aardvark language and be on course to become an amazing Aardvark developer.
In programming, you usually start with a Hello World program, but let's mix it up this time: let's start by learning how to write a program that takes the user's username as input and outputs a random welcome message. But first, we need to learn basic input and output.
This code will output This is my Aardvark program!:
output("This is my Aardvark program!")
You see, not that hard. Now don't forget the quotes; it won't work right without them. Let's look at what this code does: output has parentheses, which means it's a function, and inside the quotes is the message that shows up on the screen. Hmm, I wonder if I can change what's in the quotes and it will change the message. Let's try it:
output("This is a different message")
If you run that program you will see that it worked! Now let's learn how to take user input:
input("Enter your username: ")
If you run that code, you will see that it will give that message and then let you type in an answer. But how do we store that answer in our program? We use variables; variables store data for use later in the code. So if we add a = to the beginning of that, it will store the input in the variable a. Let's try it:
a = input("Enter your username: ")
How do we know if it worked? Well, let's try to output the data inside a. Try this code:
a = input("Enter your username: ")
output(a)
When you output variables, you don't need those quotes. Let's run it. When we run it, it will ask for our username and then output what we typed in. We can already get their username, but we still need the random welcome message, so let's first start with just a welcome message. If we output "Welcome, " before we output what they typed in, it would say Welcome, plus their username. Let's try it:
a = input("Enter your username: ")
output("Welcome, ")
output(a)
It worked! Let's simplify it: just do "Welcome, " + a instead of doing it on separate lines. Try this:
a = input("Enter your username: ")
output("Welcome, " + a)
It worked! We now have our username input and our welcome message, but what about the random part? How can we make it do something random?
In Aardvark, the tools module has some functions to help us do random stuff. But how do we include a module? Try this code:
#include tools
It makes all the functions in the tools module available in our program. Let's add it to our code:
#include tools
a = input("Enter your username: ")
output("Welcome, " + a)
Now, what is the function to do random stuff? In Aardvark, you can use the randomchoice function from the tools module to make random choices. randomchoice takes a list of possible choices as its one argument. How do we make a list in Aardvark? Just put it in between [ and ] and separate the items by commas. Let's try this code:
#include tools
username = input("Enter your username: ")
message = randomchoice(["Welcome, ", "Hello, ", "Have a good day, "])
output(message + username)
It worked! We have reached our goal!
|
Dynamix
Dynamix (aka the 'Introduction' Intro of Dynamix on N64) is an N64 homebrew demo made by Immortal and Widget, published in December 1997. The demo is a marquee/bitmap-style piece, with a floating 2D logo, some rotating 3D characters and scrolling text.
You can download the ROM for Dynamix as well as its associated files here by using the password “dnxintro”.
Dextrose description
Bitmap-Logo, 3D-Logo, Scroller, Music
Dextrose
Transcript
Hiya and welcome to the first dynamix demo on the nintendo 64!!! since this is our first release we'd like to introduce ourselves. Currently Dynamix consists of two members: Immortal and Widget. You are probably wondering why we chose the name 'Dynamix', besides the fact that the name has a nice ring to it, it has been a very cool group in PC scene with people like Bluewater, Razor, Sharp, Hoson, etc… As some of you may know this demo has been delayed some weeks. We don't wanna bore you with details but besides lack of time and stupid typo's. It's mostly due to the Doc64 lack of quality. Widget's Doc currently has a corrupt memory module and a exploded power supply. So he'll probably won't see the final version of this demo till next week. 'Long live vd64' my ass. Still it's the best equipment for developing, let's wait and see what the Z64 can do. It's time for demo credits: Immortal coded the 3D object part and the GL2C converter, Widget coded the 2D sprite part and PPM2C convertor. And featuring as guest coder we have Lac!!! He coded and let us use Lacmod!!! The S3M module you are listening to is called 'Ode 2 Jodie Sweetin' and is composed by Cygnes. Ok ok ok. It's time for the greetings and hello's, all in non-logical order: Lac, Stan, Twinnie, Locke, Nagra, Jovis, Jihad, Uxorious, Fractal, Datawiz, Oman, Silo, Kid Stardust, Rene, Sispeo, Titanik, Wildfire, Steve, Fusionman, Stumble, Tazdevil, Speedy-G, Wizard, RIP, Conmango, Superdoc, Matzer, Nil, Fredro and ofcoz Lac hehehehe.. That's all for this introduction, if you wanna contact us, Join N64dev on Efnet IRC. Wrap
Dynamix.nfo
|_. _/..\/ |/__...| | /.\_\/:: | / | | _. | : |/ | | \ / | |\ / | : \ | | \ |/ : :\ \/ | |/ : \ |__|__| |/|||/ |_RD +-------------_/------__|-;Presents+-__|-----------+ The 'Introduction' Intro of Dynamix of the Nintendo 64! Coded by: Immortal and Widget Rel.date: 7th of December 1997 Hiya, This introduction isn't 100% complete but we wanted to release it asap after already being delayed for weeks. So check out the rom and remember this demo is just a 'Hello world, there is a new group on the block'. The demo contains some n64 fundamentals: 2d,3d and sound. 2d (sprites) part consists of the scroller and logo. The logo is drawn by Fractal. 3d part consists of the rotating dnx logo. For sound we use LaC's excellent lacmod player. The .S3M module is composed by Cygnus. Greetings to: lac,stan,twinnie,locke,nagra,jovis,jihad,uxorious, fractal,datawiz,oman,silo,kidstardust,rene,sispeo, titanik,wildfire,steve,fusionman,stumble,tazdevil, speedy-g,wizard,rip,conmango,superdoc and matzer. Cheers. -Immortal & Widget Message to LaC : We used transparency in the DNX 3d-part +--------------------------------------------------------+
FILE_ID.DIZ
<<www.dextrose.com>>|_. _/..\/ |/__...| | /.\_\/:: | / | | _. | : |/ | | \ / | |\ / | : \ | | \ |/ : :\ \/ | |/ : \ |__|__| |/|||/ |_RD +------_/------__|-;Presents+-__|---+ The 'Introduction' Intro of Dynamix on N64! +-----------------------------------------+
WARP9.FST
______ ________ ________ ________ ________
_____\ \ _\____ |_\___ \_\___ \ / ___/_ / /\
\ / | / / /____/ \_____ |
\____________/_________|____\ \____| |____|
\____\
This is a demo in the same vein as all the others that deal with introductions. In the Dextrose days, people would often introduce themselves to the community by making a demo to show that they were worthy, and this is the one for the Dynamix group.
At the time of release, this ROM announced that Dynamix consisted of Immortal and Widget, but it is implied that they left it open for other people to join if they wanted.
The demo itself is there to showcase that they can set up the basics of making a Nintendo 64 ROM: 2D visuals, 3D models and sound. It seems to have accomplished that goal pretty well, even though these are fairly basic.
The ROM mentions that it was delayed due to technical issues, and the accompanying text says that there will be an update in a week. As far as I know, no other version of the Intro was ever publicly released.
There are a few other small parts that other people played, such as Fractal doing the logo design and LaC providing the lacmod player. The music playing throughout the ROM is called Ode 2 Jodie Sweetin by Cygnes, released in 1996.
I think that the ROM is a very good first attempt. The music is nice, the graphics show that they are capable and it loops nicely. The only thing I’d change is that the text seems to me to be a bit too focused on what was happening at that moment rather than trying to make something a bit more timeless.
Overall Dynamix is a good ROM to start your group out with.
|
3D Projection Tutorial
I am going to teach you how to draw a 3D object.
I'm going to start with some useful information about the project below. First, it takes about 10-15 seconds to generate the map. Second, use WASD to move left, right, forward, and backward, and use space and right shift (not left shift) to go up and down.
To start, the only type of math used in this tutorial is linear equations. You will still be able to use this tutorial and render your own 3D objects if you don't know linear equations, but it may be more challenging and you likely won't understand how 3D projections work. Ok, now that we're done with that, let's figure out how to project a 3D point onto a plane (a flat 2D surface). The reason I used the word project is that you take the point in 3D space, then you draw a line to the player's eye. From there you find the x and y point at z = 80. Any position at z = 80 is going to be referred to as the canvas. The number 80 is represented by the variable player_fov, aka the player's field of view. This variable will be used in the equations. Then from there you would draw that point at that position on the screen. One way to sum that up: you are projecting a beam of light from a 3D point and seeing where it hits the player's eye. Now let's get into the math that projects the object.
Here's a good image that helps show what my program does to project a vector3 (x, y, z) point onto the canvas. The link to the website the image was taken from is right below the image, and it was also the most helpful website for me when it came to learning how 3D projections work: Computing the Pixel Coordinates of a 3D Point
And this gif further explains it:
Step 1: the first step is to remove b (the y-intercept) from the equation:
y = mx + b
This will make projecting the object easier. To do this I am going to use a few variables: object_x (the x position of the object), object_y (the y position of the object), object_z (the z position of the object), and player_x (the player's x position), player_y (the player's y position), player_z (the player's z position). The outputs will be new_object_x, new_object_y, and new_object_z. Those outputs will be used in all the equations. Now onto the equations to remove b:
new_object_x = player_x - object_x
new_object_y = player_y - object_y
new_object_z = player_z - object_z
Step 2: now we will find the y position of the collision on the canvas (defined at the top). The output will be screen_y, and the variable m is the slope. The equations to do this are:
m = new_object_y / new_object_z
screen_y = m * player_fov + object_y
Step 3: this is the final step and gets the x position of the collision on the canvas. The output will be screen_x, and the variable m is the slope. The equations are:
m = new_object_x / new_object_z
screen_x = m * player_fov + object_x
If you don't want to write any code yourself, then here is the code to project a 3D point, written in Python 3:
# vec3_player_pos (the player's [x, y, z] position) and player_fov are globals defined elsewhere.
def convert_3D_to_2D(vec3_pos):
    # Step 1: distance from the player to the point on each axis (removes the y-intercept)
    vec3_dist = [vec3_player_pos[0] - vec3_pos[0],
                 vec3_player_pos[1] - vec3_pos[1],
                 vec3_player_pos[2] - vec3_pos[2]]
    # Step 2: y position on the canvas
    try:
        m = vec3_dist[1] / vec3_dist[2]
    except ZeroDivisionError:
        m = 0
    y = m * player_fov + vec3_pos[1]
    # Step 3: x position on the canvas
    try:
        m2 = vec3_dist[0] / vec3_dist[2]
    except ZeroDivisionError:
        m2 = 0
    y2 = m2 * player_fov + vec3_pos[0]
    return [y2, y]  # [screen_x, screen_y]
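A minimal usage sketch (the player position, FOV value, and sample point here are my own illustrative values, not from the tutorial):
vec3_player_pos = [0, 0, 0]    # hypothetical player position used by convert_3D_to_2D
player_fov = 80                # the canvas sits at z = 80, as described above

point = [10, 5, 200]           # a hypothetical 3D point in front of the player
screen_x, screen_y = convert_3D_to_2D(point)
print(screen_x, screen_y)      # where the point lands on the 2D canvas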
You are now done with the math! And congratulations on finishing the tutorial! I hope this worked for you, and if you have any questions or need help then feel free to ask in the comments.
Here are some images of a 3D map made using Perlin-style noise, rendered using this 3D projection method and run without a maximum render distance on my computer (not on repl):
And here are some more images of the world (not run on repl) using my ray-tracing lighting system, which traces a ray from each point to the sun and checks to see if it collides with anything. This feature is too slow for repl and will not be implemented there. In this image I was getting around 0.3 fps:
The project below is the same code as used to create those images, except repl runs pygame very slowly, so I had to implement a maximum render distance to allow you to move around the map in real time. Another feature in my project is a smoothish lighting system. Basically I subtract the height of one of the three points on the triangle being rendered from another. Then I use the clamp function shown below to limit how much darker or brighter an object can be. This function is also written in Python 3:
def clamp(value, min_, max_):
    return min(max(value, min_), max_)
After doing this I add the new value that I generated to the r, g, and b colour before clamping each of the values to be within 0-255 using the same function shown above. I also have one more tip that may help: only draw a triangle if all three of its points' x and y positions are greater than 0, otherwise there will be weird lines when you pass an object. One final feature in my program is a noise function. This creates the terrain that is rendered to the screen smoothly. First off, this is not an actual Perlin noise algorithm, but it still creates smooth terrain. The way this function works, on a basic level, is that it goes from the bottom left to the top right and on the way it gets the average height of the terrain around it (if there is no terrain around it, it sets its height to a random value). It then chooses a random value from a range of values until the value is within a certain range (this range is itself random, making the heights of the terrain even more random) of the average terrain height. When it finds this height, it adds the height to a 2D array. I also added a chance to spawn a peak, and if it does, the height is forced to be a lot higher than the average, creating a jump in the height of the terrain around it. That's all it takes to recreate the noise function. A tip for creating a 3D game is to use a mesh (a list of shape positions that creates an object when rendered) and use a triangle as the shape it renders. Another useful tip is to order the terrain from top left to bottom so that when rendered the terrain will not overlap and create a weird visual bug. I will be updating this tutorial once I add in camera rotation so you too can look around your world at more angles than now. I have also started a ray tracer and have put the prototype on repl; the link is below. And finally, with all this new-found information, you should be well enough equipped to create your own 3D game or game engine from scratch. Good luck!
Interesting facts:
There are 22500 polygons(triangles) in each map
There are 67500 vector3(x, y, z) points in the entire map
Good sources for learning more about 3D projections and the math behind them:
|
# Copyright (C) 2009, Tutorius.org
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
import unittest
from sugar.tutorius.store import *
g_tutorial_id = '114db454-b2a1-11de-8cfc-001f5bf747dc'
g_other_id = '47efc6ee-b2a3-11de-8cfc-001f5bf747dc'
class StoreProxyTest(unittest.TestCase):
    def setUp(self):
        self.store = StoreProxy()
    def tearDown(self):
        pass
    def test_get_categories(self):
        categories = self.store.get_categories()
        assert isinstance(categories, list), "categories should be a list"
    def test_get_tutorials(self):
        self.store.get_tutorials()
    def test_get_tutorial_collection(self):
        collection_list = self.store.get_tutorial_collection('top5_rating')
        assert isinstance(collection_list, list), "get_tutorial_collection should return a list"
    def test_get_latest_version(self):
        version_dict = self.store.get_latest_version([])
        assert isinstance(version_dict, dict)
    def test_download_tutorial(self):
        tutorial = self.store.download_tutorial(g_tutorial_id)
        assert tutorial is not None
    def test_login(self):
        assert self.store.login("unknown_user", "random_password")
    def test_register_new_user(self):
        user_info = {
            'name' : "Albert",
            'last_name' : "The Tester",
            'location' : 'Mozambique',
            'email' : 'albertthetester@mozambique.org'
        }
        assert self.store.register_new_user(user_info)

class StoreProxyLoginTest(unittest.TestCase):
    def setUp(self):
        self.store = StoreProxy()
        self.store.login("unknown_user", "random_password")
    def tearDown(self):
        session_id = self.store.get_session_id()
        if session_id is not None:
            self.store.close_session()
    def test_close_session(self):
        assert self.store.close_session()
    def test_get_session_id(self):
        session_id = self.store.get_session_id()
        assert session_id is not None
    def test_rate(self):
        assert self.store.rate(5, g_tutorial_id)
    def test_publish(self):
        # TODO : We need to send in a real tutorial loaded from
        # the Vault
        assert self.store.publish(['This should be a real tutorial...'])
    def test_unpublish(self):
        # TODO : We need to send in a real tutorial loaded from
        # the Vault
        self.store.publish([g_tutorial_id, 'Fake tutorial'])
        assert self.store.unpublish(g_tutorial_id)
    def test_update_published_tutorial(self):
        # TODO : Run these tests with files from the Vault
        self.store.publish([g_tutorial_id, 'Fake tutorial'])
        assert self.store.update_published_tutorial(g_tutorial_id, [g_tutorial_id, 'This is an updated tutorial'])
|
I am interested in doing a 2D numerical integration. Right now I am using scipy.integrate.dblquad, but it is very slow. Please see the code below. I need to evaluate this integral hundreds of times with completely different parameters, so I want to make the processing as fast and efficient as possible. The code is:
import numpy as np
from scipy import integrate
from scipy.special import erf
from scipy.special import j0
import time
q = np.linspace(0.03, 1.0, 1000)
start = time.time()
def f(q, z, t):
    return t * 0.5 * (erf((t - z) / 3) - 1) * j0(q * t) * (1 / (np.sqrt(2 * np.pi) * 2)) * np.exp(
        -0.5 * ((z - 40) / 2) ** 2)

y = np.empty([len(q)])
for n in range(len(q)):
    y[n] = integrate.dblquad(lambda t, z: f(q[n], z, t), 0, 50, lambda z: 10, lambda z: 60)[0]
end = time.time()
print(end - start)
Time taken is
212.96751403808594
This is too much. Please suggest a better way to achieve what I want to do. I have read that quadpy can do this job better and much faster, but I have no idea how to implement it. Also, I tried to use Cython's prange, but SciPy doesn't work without the GIL. I tried Numba, but again it didn't work with SciPy. Please help.
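One possible direction (my sketch, not part of the original question): since the integrand is smooth over the rectangle, a fixed-order Gauss-Legendre rule can be vectorized over all q values at once with NumPy, replacing the per-q dblquad calls. The node counts below are illustrative and the result should be checked against dblquad for a few q values before trusting it:
import numpy as np
from scipy.special import erf, j0

q = np.linspace(0.03, 1.0, 1000)

def f(q, z, t):
    return t * 0.5 * (erf((t - z) / 3) - 1) * j0(q * t) * (1 / (np.sqrt(2 * np.pi) * 2)) * np.exp(
        -0.5 * ((z - 40) / 2) ** 2)

def gl_nodes(a, b, n):
    # Gauss-Legendre nodes/weights mapped from [-1, 1] onto [a, b]
    x, w = np.polynomial.legendre.leggauss(n)
    return 0.5 * (b - a) * x + 0.5 * (b + a), 0.5 * (b - a) * w

t_nodes, t_w = gl_nodes(10, 60, 100)   # inner variable t runs from 10 to 60
z_nodes, z_w = gl_nodes(0, 50, 100)    # outer variable z runs from 0 to 50

# Evaluate f on the whole (q, z, t) grid in one vectorized call (about 10 million points),
# then contract against the quadrature weights to get one value per q.
vals = f(q[:, None, None], z_nodes[None, :, None], t_nodes[None, None, :])
y = np.einsum('qzt,z,t->q', vals, z_w, t_w)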
|
tor_bug_occurred_: Bug: src/or/entrynodes.c:1845: select_entry_guard_for_circuit: Non-fatal assertion !(!guard_has_descriptor(guard)) failed.
using public obfs4 bridges.
Feb 08 12:13:40.000 [warn] tor_bug_occurred_: Bug: src/or/entrynodes.c:1845: select_entry_guard_for_circuit: Non-fatal assertion !(!guard_has_descriptor(guard)) failed. (on Tor 0.3.0.3-alpha bb2ea3642d54ff03)
Feb 08 12:13:40.000 [warn] Bug: Non-fatal assertion !(!guard_has_descriptor(guard)) failed in select_entry_guard_for_circuit at src/or/entrynodes.c:1845. Stack trace: (on Tor 0.3.0.3-alpha bb2ea3642d54ff03)
Feb 08 12:13:40.000 [warn] Bug: 0 tor 0x000000010f4e3f98 log_backtrace + 73 (on Tor 0.3.0.3-alpha bb2ea3642d54ff03)
Feb 08 12:13:40.000 [warn] Bug: 1 tor 0x000000010f4f852b tor_bug_occurred_ + 268 (on Tor 0.3.0.3-alpha bb2ea3642d54ff03)
Feb 08 12:13:40.000 [warn] Bug: 2 tor 0x000000010f40f587 entry_guard_pick_for_circuit + 307 (on Tor 0.3.0.3-alpha bb2ea3642d54ff03)
Feb 08 12:13:40.000 [warn] Bug: 3 tor 0x000000010f4732b5 guards_choose_guard + 138 (on Tor 0.3.0.3-alpha bb2ea3642d54ff03)
Feb 08 12:13:40.000 [warn] Bug: 4 tor 0x000000010f400c16 circuit_establish_circuit + 2261 (on Tor 0.3.0.3-alpha bb2ea3642d54ff03)
Feb 08 12:13:40.000 [warn] Bug: 5 tor 0x000000010f411f1b circuit_build_needed_circs + 751 (on Tor 0.3.0.3-alpha bb2ea3642d54ff03)
Feb 08 12:13:40.000 [warn] Bug: 6 tor 0x000000010f47f4ef second_elapsed_callback + 811 (on Tor 0.3.0.3-alpha bb2ea3642d54ff03)
Feb 08 12:13:40.000 [warn] Bug: 7 libevent-2.1.6.dylib 0x000000010f641127 event_process_active_single_queue + 1262 (on Tor 0.3.0.3-alpha bb2ea3642d54ff03)
Feb 08 12:13:40.000 [warn] Bug: 8 libevent-2.1.6.dylib 0x000000010f63d9d6 event_base_loop + 1189 (on Tor 0.3.0.3-alpha bb2ea3642d54ff03)
Feb 08 12:13:40.000 [warn] Bug: 9 tor 0x000000010f47eed4 do_main_loop + 1118 (on Tor 0.3.0.3-alpha bb2ea3642d54ff03)
Feb 08 12:13:40.000 [warn] Bug: 10 tor 0x000000010f4810cd tor_main + 235 (on Tor 0.3.0.3-alpha bb2ea3642d54ff03)
Feb 08 12:13:40.000 [warn] Bug: 11 tor 0x000000010f3e9775 main + 25 (on Tor 0.3.0.3-alpha bb2ea3642d54ff03)
Feb 08 12:13:40.000 [warn] Bug: 12 libdyld.dylib 0x00007fffbc194255 start + 1 (on Tor 0.3.0.3-alpha bb2ea3642d54ff03)
|
Scatter plots are useful to show data points that lie in 2D. Drawing a scatter plot in Matplotlib is easy using the scatter function.
If the x and y values are stored in NumPy arrays of shape (N, 1), drawing a scatter plot is straightforward:
import matplotlib.pyplot as mplot
# x_vals is NumPy array of shape (N, 1)
# y_vals is NumPy array of shape (N, 1)
mplot.scatter(x_vals, y_vals)
By default, markers are drawn as filled discs, that is, of type o. This can be changed to any of the other markers available in Matplotlib using the marker input argument. The different markers of Matplotlib are shown here.
The size of the marker is 20 by default. It can be changed by setting the s input argument.
The edge or border of the markers is drawn in black by default. If you do not want the edges to be drawn, set the edgecolors input argument to "none" (a string, not the None value).
The color filled inside the marker is called the face color. It can be set using the facecolors input argument. To set the RGB values of N points, pass a NumPy array of shape (N, 3), where each color value lies in [0, 1].
If a dense set of data points is being drawn, multiple markers could obscure each other. This situation can be improved by adding some transparency to the markers. This can be set using the alpha input argument.
An example that uses all the above customizations to draw the figure shown above:
import matplotlib.pyplot as mplot
# x_vals is NumPy array of shape (N, 1)
# y_vals is NumPy array of shape (N, 1)
# c_arr is NumPy array of shape (N, 3)
mplot.scatter(x_vals, y_vals, s=2, marker=".", facecolors=c_arr, edgecolors="none", alpha=0.5)
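For reference, here is a self-contained variant of the same call with randomly generated stand-in data; the array names mirror the comments above and are purely illustrative:
import numpy as np
import matplotlib.pyplot as mplot

# Illustrative random data standing in for x_vals, y_vals and c_arr
rng = np.random.RandomState(0)
N = 500
x_vals = rng.rand(N, 1)
y_vals = rng.rand(N, 1)
c_arr = rng.rand(N, 3)   # one RGB triple per point, values in [0, 1]

mplot.scatter(x_vals, y_vals, s=2, marker=".", facecolors=c_arr, edgecolors="none", alpha=0.5)
mplot.show()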
Tried with: Matplotlib 1.3.1 and Ubuntu 14.04
|
wsrf is a parallel implementation of the Weighted Subspace Random Forest algorithm of Xu et al. A novel variable weighting method is used for variable subspace selection in place of the traditional approach of random variable sampling. This approach is particularly useful when building models for high-dimensional data, which often consist of thousands of variables. Parallel computation takes advantage of multi-core machines and clusters of machines to build random forest models from high-dimensional data with reduced elapsed times.
The package ships with an HTML vignette that includes more details and a few examples.
Currently, wsrf requires R (>= 3.3.0) and Rcpp (>= 0.10.2). For multi-threading, a C++ compiler with C++11 thread support (for example, GCC 4.8.1) is required. Since recent versions of R support C++11 on all operating systems, we no longer support older versions of R or C++ compilers without C++11 support. To install the latest version of the package, from within R run:
R> install.packages("wsrf")
Previous versions of wsrf could be built on systems without C++11, or could use Boost for multithreading. Although these options are no longer supported, the usage is left here for anyone who needs an older version of wsrf. The choice is made at installation time, depending on what is available:
# To install previous version of wsrf without C++11
R> install.packages("wsrf", type = "source", configure.args = "--enable-c11=no")
# To install previous version of wsrf with Boost for multithreading
R> install.packages("wsrf",
+ type = "source",
+ configure.args = "--with-boost-include=<Boost include path>
--with-boost-lib=<Boost lib path>")
After installation, the built-in function wsrfParallelInfo can be used to check whether the installed version is the one you want (distributed or multi-threaded).
R> wsrfParallelInfo()
GPL (>= 2)
|
Sliders (also called carousels), which switch images or content from side to side, are a common sight on the top pages of websites.
Many sliders are published as jQuery plugins, but the one I recommend for its easy setup and rich options is the plugin covered here: slick.
This article explains how to install it, how to use it with samples, and the details of each option, together with demos.
Contents
Main features of slick
First, a quick rundown of slick's main features.
Responsive
Settings can be assigned per breakpoint
Touch device support
Works fully in IE8 and later
Supports vertical and horizontal carousels
Lazy loading of images
MIT license
Looking at the basics, and at its mobile support in particular, doesn't this sound like a rather handy slider?
Installation
Download the package from the official site and extract it
Now let's walk through how to set up slick.
First, download the full package from the official slick site.
Click "get it now" in the menu at the top of the page.
On the page that opens, click "Download Now" and save the zip file locally.
Unzip the downloaded file and use the files stored inside.
At minimum, "slick.css" and "slick.min.js" are enough to get slick working; if you also want to use slick's official theme, you additionally need "fonts", "ajax-loader.gif", and "slick-theme.css".
The "fonts" folder contains the font used for the default arrow buttons.
Setting up a slick slider with the minimum markup
The HTML needed is as follows.
First, load jQuery, slick, slick's CSS, and the theme CSS inside the head element.
* Adjust the paths as needed.
<script type="text/javascript" src="./js/jquery-1.12.4.min.js"></script>
<script type="text/javascript" src="./js/jquery-migrate-1.4.1.min.js"></script>
<script type="text/javascript" src="./js/slick.min.js"></script>
<link rel="stylesheet" type="text/css" href="./css/slick.css" media="screen" />
<link rel="stylesheet" type="text/css" href="./css/slick-theme.css" media="screen" />
Next, place the slider HTML anywhere inside the body element.
<div class="slick-box">
<figure><img src="img/img_slide_01.gif" width="" height="" alt=""/></figure>
<figure><img src="img/img_slide_02.gif" width="" height="" alt=""/></figure>
<figure><img src="img/img_slide_03.gif" width="" height="" alt=""/></figure>
<figure><img src="img/img_slide_04.gif" width="" height="" alt=""/></figure>
</div>
Finally, add the corresponding jQuery initialization. The minimum needed is just the following.
* The selector (highlighted in red in the original) must match the class (or ID) given to the HTML above.
<script type="text/javascript">
$(function() {
$('.slick-box').slick();
});
</script>
That alone is enough for it to work.
On a PC you can switch images with the mouse pointer; on a smartphone, by swiping.
As long as the selector specified in jQuery matches the class (or ID) of the parent element in the HTML and the markup is valid, the child elements of that element become the slides, so using ul and li is not required. This makes trial installations and debugging easy.
A slick slider that shows multiple images
With the following settings, the slider works as a type that displays several images at once.
<script type="text/javascript">
$(function() {
$('.slick-box2').slick({
infinite: true,
slidesToShow: 3,
slidesToScroll: 3
});
});
</script>
The slidesToShow option sets how many images are displayed at the same time, and slidesToScroll sets how many slides are moved per scroll. Setting infinite to true makes the slider loop endlessly, going back to the first slide after the last one.
Responsive slick settings
Inside the responsive option, write breakpoint and settings entries.
breakpoint sets the width at which the behavior changes, and settings defines, for widths at or below that breakpoint, how many slides to show and so on. When using several breakpoints, list them from the largest width down.
Concretely, it looks like the following; a few other options have been added as well.
<script type="text/javascript">
$(function() {
$('.slick-box3').slick({
dots: true, // dot-style pagination shown below the slider
infinite: true, // infinite loop
speed: 300, // transition speed
slidesToShow: 4, // show 4 images at widths of 1024px and above
slidesToScroll: 4,
responsive: [{
breakpoint: 1024,settings: { // show 3 images between 601px and 1024px
slidesToShow: 3,
slidesToScroll: 3,
}
},
{
breakpoint: 600,settings: { // show 2 images between 481px and 600px
slidesToShow: 2,
slidesToScroll: 2
}
},
{
breakpoint: 480,settings: { // show 1 image at 480px and below
slidesToShow: 1,
slidesToScroll: 1
}
}]
});
});
</script>
slick's center mode
Setting the centerMode option to true enables center mode.
If you also set centerPadding, the slides before and after the current one are shown partially peeking out at the edges. An example looks like this.
<script type="text/javascript">
$(function() {
$('.slick-box4').slick({
centerMode: true, // center mode
centerPadding: '60px', // padding before and after
autoplay: true, // autoplay
autoplaySpeed: 2000, // autoplay interval
slidesToShow: 3,
responsive: [{
breakpoint: 768,
settings: {
arrows: false, // hide the prev/next arrows
centerMode: true,
centerPadding: '40px',
slidesToShow: 3
}
},
{
breakpoint: 480,
settings: {
arrows: false,
centerMode: true,
centerPadding: '40px',
slidesToShow: 1
}
}]
});
});
</script>
Full list of available options
As the demos show, slick offers fine-grained customization without any special hacks. What do you think?
The official options are listed below; please check them together with the documentation on GitHub.
On top of the many options, customizing the theme and the stylesheets opens up even more possibilities, so do experiment.
Option Type Default Description
accessibility true or false true Whether slides can be switched with the keyboard left/right keys
adaptiveHeight true or false false Adapts the slider height to the current slide
autoplay true or false true Whether to play automatically
autoplaySpeed number (ms) 3000 Time until the next move during autoplay
arrows true or false true Whether to add "next"/"previous" buttons on either side of the slider
asNavFor string null Class of another slider on the page to operate in sync with
appendArrows string $(element) Changes where the arrow buttons are generated (Selector, htmlString, Array, Element, jQuery object)
prevArrow string (html | jQuery selector) or object (DOM node | jQuery object) <button type="button" class="slick-prev">Previous</button> HTML for the "previous" button
nextArrow string (html | jQuery selector) or object (DOM node | jQuery object) <button type="button" class="slick-next">Next</button> HTML for the "next" button
centerMode true or false false Whether to center the current slide
centerPadding string '50px' Padding for center mode, in px or %
cssEase string ('ease', 'linear', 'ease-in', 'ease-out', 'ease-in-out') 'ease' CSS easing used when switching slides
customPaging function n/a Replaces the default dot pagination with, for example, thumbnails
dots true or false false Whether to show the dot pagination below the slider
dotsClass string 'slick-dots' Class name of the dot pagination below the slider
draggable true or false true Whether dragging is enabled
fade true or false false Whether to fade between slides
focusOnSelect true or false false
easing string 'linear' Easing to use
edgeFriction integer 0.15 Amount of "bounce" when trying to move past the first or last slide while infinite sliding is off
infinite true or false true Whether to show the first slide after the last one, allowing infinite sliding
initialSlide integer 0 Index of the slide to start on
lazyLoad 'ondemand' or 'progressive' 'ondemand' Support for lazy loading of images: specify the image URL with the img data-lazy attribute, and loading starts when the slide area is about to be shown.
Example:
<img data-lazy="img/lazyfonz1.png"/>
With 'progressive', images for the remaining slides are preloaded after the initial screen is rendered.
mobileFirst true or false false Uses the mobile settings as the base when the responsive option is set
pauseOnFocus true or false true Whether autoplay pauses while the slider has focus
pauseOnHover true or false true Whether autoplay pauses while the mouse is over the slider
pauseOnDotsHover true or false false Whether autoplay pauses while the mouse is over the dot pagination
respondTo 'window', 'slider' or 'min' 'window' What the responsive breakpoints are measured against; the narrower one takes precedence
responsive object none Responsive settings
rows int 1 Setting this to more than 1 initializes grid mode; use the separate slidesPerRow option to set how many slides each row contains
slide element ''
slidesPerRow number 1 Effective when grid mode is enabled via the rows option; sets how many slides each grid row contains
slidesToShow number 1 Number of images displayed at the same time
slidesToScroll number 1 Number of images moved per scroll
speed number (ms) 300 Duration of the slide transition
swipe true or false true Whether swiping is supported
swipeToSlide true or false false Whether swipe handling also runs when, for example, only a few images are shown
touchMove true or false true Whether touch move is enabled
touchThreshold number 5 Dragging a distance of 1/(this value) of the slider triggers a move to the next/previous slide (still being verified)
useCSS true or false true Whether to use CSS Transitions for the transition handling
useTransform true or false true Whether to use CSS Transforms for the transition handling
variableWidth true or false false Whether each slide has a variable width; set to true when the slider contains images of different sizes
vertical true or false false Whether to slide vertically
verticalSwiping true or false false Whether vertical swiping is supported
rtl true or false false Right to left: normally slides are shown left to right in HTML order; this reverses it. The parent element of the slides must have dir="rtl".
Example:
<ul class="slick-box" dir="rtl">
Other notes
When placing multiple sliders on the same page, use different class names.
If the arrow font icons will not display and the console shows an error like "CORS policy: No 'Access-Control-Allow-Origin' header is…", a .htaccess setting may be needed.
For a fix, I referred to "BootstrapのGlyphiconsが表示されない対処法（クロスオリジンリクエスト） – Qiita".
Summary and license
I had long relied on similar plugins such as bxSlider, but it behaved unstably in some environments and customization sometimes took effort. slick, which came later, was built with a wide range of environments in mind, and so far I have never had trouble installing or using it.
It is under the MIT license and free even for commercial use, which is reassuring. Since special environments and custom animations can be handled by combining options, I think slick is currently the best-balanced slider around.
If anything about installation or the options is unclear, feel free to ask in the comments section of this page.
|
Get SMS delivery status with Python
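Example using Python 2 (urllib2):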
import urllib2
afilnet_class="sms"
afilnet_method="getdeliverystatus"
afilnet_user="user"
afilnet_password="password"
afilnet_messages="123456,123457,123458"
afilnet_output=""
# Create an URL request
sUrl = "https://www.afilnet.com/api/http/?class="+afilnet_class+"&method="+afilnet_method+"&user="+afilnet_user+"&password="+afilnet_password+"&messages="+afilnet_messages+"&output="+afilnet_output
result = urllib2.urlopen(sUrl).read()
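The equivalent request using Python 3's urllib: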
from urllib.request import urlopen
from urllib.parse import urlencode
afilnet_class="sms"
afilnet_method="getdeliverystatus"
afilnet_user="user"
afilnet_password="password"
afilnet_messages="123456,123457,123458"
afilnet_output=""
# Create an URL request
sUrl = "https://www.afilnet.com/api/http/"
data = urlencode({"class": afilnet_class,"method": afilnet_method,"user": afilnet_user,"password": afilnet_password,"messages": afilnet_messages,"output": afilnet_output}).encode("utf-8")
result = urlopen(sUrl, data).read()
print(result)
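Alternatively, using the requests library with HTTP Basic Auth (this variant targets the /api/basic/ endpoint):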
import requests
afilnet_class="sms"
afilnet_method="getdeliverystatus"
afilnet_user="user"
afilnet_password="password"
afilnet_messages="123456,123457,123458"
afilnet_output=""
# Create an URL request
sUrl = "https://www.afilnet.com/api/basic/?class="+afilnet_class+"&method="+afilnet_method+"&messages="+afilnet_messages+"&output="+afilnet_output
result = requests.get(sUrl,auth=requests.auth.HTTPBasicAuth(afilnet_user,afilnet_password))
print(result.text)
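Not part of the original examples: if output is set to "json", the response can be decoded and the fields documented below inspected. A minimal sketch based on the requests variant above (the exact response layout should be confirmed against the API):
data = result.json()  # assumes the request above was made with output=json

if data.get("status") == "success":
    for item in data.get("result", []):
        # each entry is documented to carry messageid, sms, deliverydate and deliverystatus
        print(item.get("messageid"), item.get("deliverystatus"))
else:
    print("Error code:", data.get("error"))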
Parameter Description Compulsory / Optional
class=sms Class requested: Class to which the request is made Compulsory
method=getdeliverystatus Class method requested: Method of the class to which the request is made Compulsory
user User and e-mail of your Afilnet account Compulsory
password Password of your Afilnet account Compulsory
messages List of dispatch identifiers separated by commas (,) Compulsory
output Output format of the result Optional
When you make requests you will receive the following fields:
status
result (if status=success), here you will receive the following values:
messageid
sms
deliverydate
deliverystatus
error (if status=error), here you will receive the error code
The possible error codes are listed below
Code Description
MISSING_USER User or email not included
MISSING_PASSWORD Password not included
MISSING_CLASS Class not included
MISSING_METHOD Method not included
MISSING_COMPULSORY_PARAM Compulsory parameter not included
INCORRECT_USER_PASSWORD Incorrect user or password
INCORRECT_CLASS Incorrect class
INCORRECT_METHOD Incorrect method
Parameters:
class: sms
method: getdeliverystatus
user: user
password: password
messages: 123456,123457,123458
output:
Request:
https://www.afilnet.com/api/http/?class=sms&method=getdeliverystatus&user=user&password=password&messages=123456,123457,123458&output=
|
Let's try live streaming with HLS.
Recap
Two posts ago, I hammered the Win32 API from Python until I could make Kiritan say whatever I wanted.
Last time, I built a server on an Azure Windows Server that makes Kiritan speak in response to HTTP requests.
This time, I want to live-stream Kiritan's voice using HTTP Live Streaming (HLS)!
HTTP Live Streaming
HTTP Live Streaming is an HTTP-based streaming protocol developed by Apple. Besides streaming static video files, it also supports live broadcasts, as well as adaptive streaming, a technique that changes the delivered bitrate according to the connection speed.
The much-talked-about AbemaTV streams over HLS, and videos uploaded to Twitter are also delivered with HLS.
"Streaming protocol" sounds complicated, but HLS is HTTP-based and very simple. Here is a rough explanation.
How HLS works
HLS delivery is done with .ts files and .m3u8 files.
ts
A .ts file uses the MPEG-2 TS format and stores the actual video and audio being delivered.
The data to deliver is split into segments of a fixed number of seconds and saved in this MPEG-2 TS format. The split .ts files are made downloadable over HTTP.
Incidentally, Japanese digital TV broadcasting is also delivered as MPEG-2 TS.
m3u8
The .m3u8 file is the index of the delivered content. It lists the URLs of the split .ts video/audio segments described above.
Example of a .m3u8 served by AbemaTV
#EXTM3U
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=300000
240/playlist.m3u8?t=3i87VhR5nuXMsjxJRGBiEYSNPdfggGQtr9LjXNx1fr5Dufac7cEaEKMyo2UAv77B63hAvVewach5eaPjFGK3EU22fcpcFD4RAeNAE7nisDwZguUqvp&mq=720&lanceId=c99528aa-0c3c-4987-ab6c-ce5cd1430223
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=900000
360/playlist.m3u8?t=3i87VhR5nuXMsjxJRGBiEYSNPdfggGQtr9LjXNx1fr5Dufac7cEaEKMyo2UAv77B63hAvVewach5eaPjFGK3EU22fcpcFD4RAeNAE7nisDwZguUqvp&mq=720&lanceId=c99528aa-0c3c-4987-ab6c-ce5cd1430223
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1400000
480/playlist.m3u8?t=3i87VhR5nuXMsjxJRGBiEYSNPdfggGQtr9LjXNx1fr5Dufac7cEaEKMyo2UAv77B63hAvVewach5eaPjFGK3EU22fcpcFD4RAeNAE7nisDwZguUqvp&mq=720&lanceId=c99528aa-0c3c-4987-ab6c-ce5cd1430223
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=2200000
720/playlist.m3u8?t=3i87VhR5nuXMsjxJRGBiEYSNPdfggGQtr9LjXNx1fr5Dufac7cEaEKMyo2UAv77B63hAvVewach5eaPjFGK3EU22fcpcFD4RAeNAE7nisDwZguUqvp&mq=720&lanceId=c99528aa-0c3c-4987-ab6c-ce5cd1430223
This is called a Master Playlist; it exists for adaptive streaming, which serves different bitrates depending on the connection speed. It lists the URLs of the Media Playlists shown next, together with the assumed bandwidths.
Another .m3u8 served by AbemaTV
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:6
#EXT-X-MEDIA-SEQUENCE:4
#EXT-X-DISCONTINUITY-SEQUENCE:1
#EXT-X-KEY:METHOD=AES-128,URI="abematv://v2/abema-news/abema-news/DUjoiyL1pJGkADZotyiXDn5",IV=0xaccca4b41de3d9afb029070eb564be40
#EXTINF:5.005000,
https://abematv.akamaized.net/tsnews/abema-news/h264/720/5BPWe1D8Hu9yCC8HaA3oHS.ts
#EXTINF:5.005000,
https://abematv.akamaized.net/tsnews/abema-news/h264/720/5SphyMY1TTLvYkFo7B5JuM.ts
#EXTINF:5.005000,
https://abematv.akamaized.net/tsnews/abema-news/h264/720/2kxyGFo9sH9zUUfKj5USUk.ts
#EXTINF:5.005000,
https://abematv.akamaized.net/tsnews/abema-news/h264/720/Cz43TVWLgUgqskzvWBBnjA.ts
This is called a Media Playlist; it lists the URLs of the .ts files containing the video and audio being delivered.
How playback works
The client first fetches the .m3u8 file. If it is a Master Playlist, it picks an appropriate .m3u8 based on the connection speed. If it is a Media Playlist, it fetches the .ts files and plays them.
The client re-reads the .m3u8 according to the tags in it (the lines starting with #EXT). For live streaming, all that is needed is that new segments have been appended by the time the client re-reads the playlist.
The main tags are explained below.
EXT-X-TARGETDURATION
Specify the integer closest to the duration of the longest .ts segment. The client re-reads the .m3u8 roughly every this many seconds.
EXT-X-MEDIA-SEQUENCE
Specifies which segment, counted over the whole broadcast, the first .ts listed in this .m3u8 is. The client needs it to play the split .ts files in the correct order.
EXTINF
The duration of a single split .ts segment, in seconds; decimals are allowed.
Playing HLS
The great thing about HLS is that it can be played in the browser. https://caniuse.com/#search=HLS
Hmm????? There is a lot of red...
Firefox and Chrome do not support it!!!!!!!!! For once, Edge is the capable one...
Sad. But with Media Source Extensions (MSE) you can still play HLS well enough, so no worries. https://caniuse.com/#search=MSE
Incidentally, AbemaTV seems to use a paid player called THEOplayer.
Live streaming with HLS
Now that I feel like I more or less understand HLS, let's try a live broadcast.
To stream live over HLS, you simply repeat two steps:
encode the data to MPEG-2 TS, and
append links to the new .ts files to the .m3u8.
If you only keep appending .ts links, the .m3u8 grows without bound, so remove links to old .ts files after some time has passed. When you remove a .ts link, remember to increase #EXT-X-MEDIA-SEQUENCE, or clients will get confused.
Very simple! Doing just what was described above is enough to write a live streaming server.
This time, I will fetch the timeline from Twitter, have Kiritan read the tweets out nicely, and stream that audio in real time using HLS.
Writing my own code to split the audio and turn it into MPEG-2 TS would be painful, so I asked FFMPEG to do it. https://www.ffmpeg.org/ffmpeg-formats.html#hls-1
The plan
twitter.listen()
Fetch tweets via UserStream
Submit a job to kiritan.py
Queue the synthesized WAV files in encoder.py
encoder.livestreaming()
If the queue is empty, add silence to the playlist
If the queue has a file, split it into TS segments and add them to the playlist
Wait for the duration of the first TS in the playlist, then remove it
Done
Once the plan was set, it was just a matter of writing it...
Code
Full code
The HLS-related processing is just this!
import glob, logging, os, re, subprocess, threading, time, traceback

# Encode a file to MPEG-TS with FFMPEG (the payload is MP3)
def ts(file):
    logging.info("Encoding WAV to MPEG-TS")
    data = subprocess.run(
        [
            "ffmpeg",
            "-i", file, "-vn",
            "-acodec", "libmp3lame",
            "-ab", "128k",
            "-ac", "2",
            "-ar", "44100",
            "-f", "hls",
            "-hls_time", "2",
            "-hls_list_size", "0",
            "-start_number", str(int(time.time() * 1000)),
            "-hls_segment_filename", "static/live%d.ts",
            "pipe:1.m3u8"
        ],
        stdout=subprocess.PIPE,
        stderr=subprocess.DEVNULL
    )
    # Parse the playlist that ffmpeg wrote to stdout and return it
    playlist = data.stdout.decode("utf-8")
    playlist = playlist[playlist.rfind("#EXTM3U"):]
    # Tuples of (duration, file path)
    return re.findall(r"#EXTINF:([\d.]+),\s+(\S+)", playlist)

# Append a file to the live streaming queue
que = []
def enqueue(f):
    que.append(f)

# Update the live playlist
tsl = []
seq = 0
def __livecasting():
    global seq
    while True:
        try:
            if len(que) != 0:
                # If the queue has data, add it to the playlist
                tsl.extend(ts(que.pop(0)))
            else:
                # If the queue is empty, serve the silence file
                while len(tsl) < 3:
                    tsl.append(("2.04", "silent.ts"))
            # Sleep for the duration of one TS segment, then drop it
            time.sleep(float(tsl[0][0]))
            tsl.pop(0)
            seq += 1
        except:
            logging.error(traceback.format_exc())

# Start the live streaming thread
def livecasting():
    # Delete old streaming data
    for f in glob.glob("static/live*"):
        os.remove(f)
    threading.Thread(target=__livecasting).start()

# Generate the live playlist
def playlist():
    pl = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        "#EXT-X-TARGETDURATION:3",
        "#EXT-X-MEDIA-SEQUENCE:%d" % seq
    ]
    for ts in tsl[:5]:
        pl.append("#EXTINF:%s," % ts[0])
        pl.append("#EXT-X-DISCONTINUITY")
        pl.append("/static/%s" % ts[1])
    return "\n".join(pl)
Since this uses ffmpeg, it has to be installed separately. The Python libraries needed are pypiwin32, flask, and tweepy:
pip install pypiwin32 flask tweepy
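Not part of the original code: a minimal sketch of how the generated playlist might be exposed with Flask; the module name encoder, the route path, and the port are assumptions.
from flask import Flask, Response

import encoder  # the module above, assumed to expose playlist() and livecasting()

app = Flask(__name__)

@app.route("/live.m3u8")
def live_playlist():
    # Serve the current playlist with the HLS MIME type; the .ts segments under
    # static/ are served by Flask's built-in static file handling.
    return Response(encoder.playlist(), mimetype="application/vnd.apple.mpegurl")

if __name__ == "__main__":
    encoder.livecasting()  # start the background thread that updates the playlist
    app.run(host="0.0.0.0", port=8000)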
Testing
Most browsers could play the stream via hls.js.
Browsers with native HLS support (Safari, Edge, iOS Safari, Android Chrome) could also play the .m3u8 directly.
It might stutter a little on Android, though???
Likely pitfalls
If the length of each TS segment, the overall playlist length, and #EXT-X-TARGETDURATION are not tuned together, playback may fail or stutter.
I do not know the optimal way to set these, so this time I just experimented.
If you do not add #EXT-X-DISCONTINUITY when a TS segment switches to one generated from different media, playback stops.
Apple's software copes gracefully; everything else does not.
Twitter's UserStream fails authentication if the PC clock is off.
Wrapping up
So, on an Azure Windows Server, I drove VOICEROID through the Win32 API, had it read my Twitter timeline aloud, and live-streamed that audio over HLS!
There is still a lot I do not understand about the Win32 API and HLS, so if you think "that's just wrong!", please let me know.
In any case, Kiritan is adorable!
The end
|
In part 2 of our series on MLflow blogs, we demonstrated how to use MLflow to track experiment results for a Keras network model using binary classification. We classified reviews from an IMDB dataset as positive or negative. And we created one baseline model and two experiments. For each model, we tracked its respective training accuracy and loss and validation accuracy and loss.
In this third part of our series, we'll show how you can save your model, reproduce results, load a saved model, and predict unseen reviews—all easily with MLflow—and view results in TensorBoard.
Saving Models in MLflow
MLflow logging APIs allow you to save models in two ways. First, you can save a model on a local file system or on a cloud storage such as S3 or Azure Blob Storage; second, you can log a model along with its parameters and metrics. Both preserve the Keras HDF5 format, as noted in MLflow Keras documentation.
First, if you save the model using MLflow Keras model API to a store or filesystem, other ML developers not using MLflow can access your saved models using the generic Keras Model APIs. For example, within your MLflow runs, you can save a Keras model as shown in this sample snippet:
import mlflow.keras
#your Keras built, trained, and tested model
model = ...
#local or remote S3 or Azure Blob path
model_dir_path=...
# save the mode to local or remote accessible path on the S3 or Azure Blob
mlflow.keras.save_model(model, model_dir_path)
Once saved, ML developers outside MLflow can simply use the Keras APIs to load the model and predict it. For example,
import keras
from keras.models import load_model
model_dir_path = ...
new_data = ...
model = load_model(model_dir_path)
predictions = model.predict(new_data)
Second, you can save the model as part of your run experiments, along with other metrics and artifacts as shown in the code snippet below:
import mlflow
import mlflow.keras
#your Keras built, trained, and tested model
model = ...
with mlflow.start_run():
# log metrics
mlflow.log_metric("binary_loss", binary_loss)
mlflow.log_metric("binary_acc", binary_acc)
mlflow.log_metric("validation_loss", validation_loss)
mlflow.log_metric("validation_acc", validation_acc)
mlflow.log_metric("average_loss", average_loss)
mlflow.log_metric("average_acc", average_acc)
# log artifacts
mlflow.log_artifacts(image_dir, "images")
# log model
mlflow.keras.log_model(model, "models")
With this second approach, you can access its run_uuid or location from the MLflow UI runs as part of its saved artifacts:
In our IMDB example, you can view code for both modes of saving in train_nn.py, class KTrain(). Saving model in this way provides access to reproduce the results from within MLflow platform or reload the model for further predictions, as we’ll show in the sections below.
Reproducing Results from Saved Models
As part of the machine learning development life cycle, reproducibility of any model experiment by ML team members is imperative. Often you will want to either retrain or reproduce a run from several past experiments to review the respective results for sanity, auditability, or curiosity.
One way, in our example, is to manually copy the logged hyper-parameters from the MLflow UI for a particular run_uuid and rerun using main_nn.py or reload_nn.py with the original parameters as arguments, as explained in the README.md.
Either way, you can reproduce your old runs and experiments:
python reproduce_run_nn.py --run_uuid=5374ba7655ad44e1bc50729862b25419
python reproduce_run_nn.py --run_uuid=5374ba7655ad44e1bc50729862b25419 [--tracking_server=URI]
Or use mlflow run command:
mlflow run keras/imdbclassifier -e reproduce -P run_uuid=5374ba7655ad44e1bc50729862b25419
mlflow run keras/imdbclassifier -e reproduce -P run_uuid=5374ba7655ad44e1bc50729862b25419 [-P tracking_server=URI]
By default, the tracking_server defaults to the local mlruns directory. Here is an animated sample output from a reproducible run:
Fig 2. Run showing reproducibility from a previous run_uuid
Loading and Making Predictions with Saved Models
In the previous sections, when executing your test runs, the models used for these test runs were also saved via mlflow.keras.log_model(model, "models"). Your Keras model is saved in HDF5 file format, as noted in MLflow > Models > Keras. Once you have found a model that you like, you can re-use it with MLflow as well.
This model can be loaded back as a Python function, as noted in mlflow.keras, using mlflow.keras.load_model(path, run_id=None).
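As a rough sketch of that call (the artifact path and run id are placeholders to be copied from the MLflow UI, and the input must be preprocessed into the encoding the model was trained on):
import numpy as np
import mlflow.keras

# Placeholders: artifact path and run id as shown in the MLflow UI for your run
model = mlflow.keras.load_model("models", run_id="55d11810dd3b445dbad501fa01c323d5")

# Placeholder input: a real review must be tokenized/vectorized exactly as the
# training data was before calling predict
new_data = np.zeros((1, 10000))
predictions = model.predict(new_data)
print(predictions)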
To execute this, you can load the model you had saved within MLflow by going to the MLflow UI, selecting your run, and copying the path of the stored model as noted in the screenshot below.
With your model identified, you can type in your own review by loading your model and executing it. For example, let’s use a review that is not included in the IMDB Classifier dataset:
this is a wonderful film with a great acting, beautiful cinematography, and amazing direction
To run a prediction against this review, use the predict_nn.py against your model:
python predict_nn.py --load_model_path='/Users/dennylee/github/jsd-mlflow-examples/keras/imdbclassifier/mlruns/0/55d11810dd3b445dbad501fa01c323d5/artifacts/models' --my_review='this is a wonderful film with a great acting, beautiful cinematography, and amazing direction'
Or you can run it directly using mlflow and the imdbclassifer repo package:
mlflow run keras/imdbclassifier -e predict -P load_model_path='/Users/jules/jsd-mlflow-examples/keras/imdbclassifier/keras_models/178f1d25c4614b34a50fbf025ad6f18a' -P my_review='this is a wonderful film with a great acting, beautiful cinematography, and amazing direction'
The output for this command should be similar to the following output predicting a positive sentiment for the provided review.
Using TensorFlow backend.
load model path: /tmp/models
my review: this is a wonderful film with a great acting, beautiful cinematography, and amazing direction
verbose: False
Loading Model...
Predictions Results:
[[ 0.69213998]]
Examining Results with TensorBoard
In addition to reviewing your results in the MLflow UI, the code samples save TensorFlow events so that you can visualize the TensorFlow session graph. For example, after executing the statement python main_nn.py, you will see something similar to the following output:
Average Probability Results:
[0.30386349968910215, 0.88336000000000003]
Predictions Results:
[[ 0.35428655]
[ 0.99231517]
[ 0.86375767]
...,
[ 0.15689197]
[ 0.24901576]
[ 0.4418138 ]]
Writing TensorFlow events locally to /var/folders/0q/c_zjyddd4hn5j9jkv0jsjvl00000gp/T/tmp7af2qzw4
Uploading TensorFlow events as a run artifact.
loss function use binary_crossentropy
This model took 51.23427104949951 seconds to train and test.
You can extract the TensorBoard log directory with the output line stating Writing TensorFlow events locally to .... And to start TensorBoard, you can run the following command:
tensorboard --logdir=/var/folders/0q/c_zjyddd4hn5j9jkv0jsjvl00000gp/T/tmp7af2qzw4
Within the TensorBoard UI:
Click on Scalars to review the same metrics recorded within MLflow: binary loss, binary accuracy, validation loss, and validation accuracy.
Click on Graph to visualize and interact with your session graph.
Closing Thoughts
In this blog post, we demonstrated how to use MLflow to save models and reproduce results from saved models as part of the machine development life cycle. In addition, through both python and mlflow command line, we loaded a saved model and predicted the sentiment of our own custom review unseen by the model. Finally, we showcased how you can utilize MLflow and TensorBoard side-by-side by providing code samples that generate TensorFlow events so you can visualize the metrics as well as the session graph.
What’s Next?
You have seen, in three parts, various aspects of MLflow: from experimentation to reproducibility, and using the MLflow UI and TensorBoard for visualization of your runs.
Read More
Here are some resources for you to learn more:
Read MLflow Docs
Find out How to Use Keras, TensorFlow, and MLflow with PyCharm
Learn How to Use MLflow to Experiment a Keras Network Model: Binary Classification for Movie Reviews
Learn from Introducing mlflow-apps: A Repository of Sample Applications for MLflow
View MLflow Meetup Presentations and Slides
Get Github sources for this blog example
Find out New Features in MLflow Release v0.6.0
|
When I try to open FC, it shows the loading screen and then crashes.
If I run it through the terminal I get the following:
Code: Select all
Program received signal SIGSEGV, Segmentation fault.
#0 /lib/x86_64-linux-gnu/libc.so.6(+0x3ef20) [0x7f3060b73f20]
#1 /lib/x86_64-linux-gnu/libc.so.6(+0x18ad6a) [0x7f3060cbfd6a]
#2 0x7f301e6573db in QMetaType::registerNormalizedType(QByteArray const&, void (*)(void*), void* (*)(void*, void const*), int, QFlags<QMetaType::TypeFlag>, QMetaObject const*) from /usr/lib/x86_64-linux-gnu/libQt5Core.so.5+0xbb
#3 /usr/lib/python2.7/dist-packages/PySide2/QtCore.x86_64-linux-gnu.so(+0x21b98e) [0x7f301f1b598e]
#4 /usr/lib/python2.7/dist-packages/PySide2/QtCore.x86_64-linux-gnu.so(initQtCore+0x5e) [0x7f301f23f5ae]
#5 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(_PyImport_LoadDynamicModule+0x9b) [0x7f3062803cab]
#6 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(+0x1d4afe) [0x7f3062887afe]
#7 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(+0x151e91) [0x7f3062804e91]
#8 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(+0x152176) [0x7f3062805176]
#9 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyImport_ImportModuleLevel+0x2f5) [0x7f3062805565]
#10 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(+0xb4de4) [0x7f3062767de4]
#11 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyObject_Call+0x43) [0x7f3062707333]
#12 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyEval_CallObjectWithKeywords+0x47) [0x7f30628917a7]
#13 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x3909) [0x7f306275dac9]
#14 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x7d8) [0x7f3062892278]
#15 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyEval_EvalCode+0x19) [0x7f306275a029]
#16 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyImport_ExecCodeModuleEx+0xac) [0x7f30628821cc]
#17 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(+0x1d4462) [0x7f3062887462]
#18 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(+0x1d4a3e) [0x7f3062887a3e]
#19 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(+0x151e91) [0x7f3062804e91]
#20 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyImport_ImportModuleLevel+0x122) [0x7f3062805392]
#21 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(+0xb4de4) [0x7f3062767de4]
#22 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyObject_Call+0x43) [0x7f3062707333]
#23 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyEval_CallObjectWithKeywords+0x47) [0x7f30628917a7]
#24 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x3909) [0x7f306275dac9]
#25 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x7d8) [0x7f3062892278]
#26 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyEval_EvalCode+0x19) [0x7f306275a029]
#27 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyRun_StringFlags+0x76) [0x7f30627fd546]
#28 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x61c4) [0x7f3062760384]
#29 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x7d8) [0x7f3062892278]
#30 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x5bf6) [0x7f306275fdb6]
#31 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x7d8) [0x7f3062892278]
#32 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyEval_EvalCode+0x19) [0x7f306275a029]
#33 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyRun_StringFlags+0x76) [0x7f30627fd546]
#34 0x7f3062cdcb46 in Base::InterpreterSingleton::runString[abi:cxx11](char const*) from /usr/lib/freecad/lib/libFreeCADBase.so+0x66
#35 0x7f306377276a in Gui::Application::runApplication() from /usr/lib/freecad/lib/libFreeCADGui.so+0xdfa
#36 freecad(main+0x6db) [0x556b103b34db]
#37 /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xe7) [0x7f3060b56b97]
#38 freecad(_start+0x2a) [0x556b103b468a]
Code: Select all
Program received signal SIGSEGV, Segmentation fault.
#0 /lib/x86_64-linux-gnu/libc.so.6(+0x3ef20) [0x7fc2df891f20]
#1 /lib/x86_64-linux-gnu/libc.so.6(+0x18ad6a) [0x7fc2df9ddd6a]
#2 0x7fc29d34f3db in QMetaType::registerNormalizedType(QByteArray const&, void (*)(void*), void* (*)(void*, void const*), int, QFlags<QMetaType::TypeFlag>, QMetaObject const*) from /usr/lib/x86_64-linux-gnu/libQt5Core.so.5+0xbb
#3 /usr/lib/python2.7/dist-packages/PySide2/QtCore.x86_64-linux-gnu.so(+0x21b98e) [0x7fc29dead98e]
#4 /usr/lib/python2.7/dist-packages/PySide2/QtCore.x86_64-linux-gnu.so(initQtCore+0x5e) [0x7fc29df375ae]
#5 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(_PyImport_LoadDynamicModule+0x9b) [0x7fc2e1521cab]
#6 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(+0x1d4afe) [0x7fc2e15a5afe]
#7 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(+0x151e91) [0x7fc2e1522e91]
#8 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(+0x152176) [0x7fc2e1523176]
#9 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyImport_ImportModuleLevel+0x2f5) [0x7fc2e1523565]
#10 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(+0xb4de4) [0x7fc2e1485de4]
#11 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyObject_Call+0x43) [0x7fc2e1425333]
#12 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyEval_CallObjectWithKeywords+0x47) [0x7fc2e15af7a7]
#13 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x3909) [0x7fc2e147bac9]
#14 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x7d8) [0x7fc2e15b0278]
#15 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyEval_EvalCode+0x19) [0x7fc2e1478029]
#16 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyImport_ExecCodeModuleEx+0xac) [0x7fc2e15a01cc]
#17 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(+0x1d4462) [0x7fc2e15a5462]
#18 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(+0x1d4a3e) [0x7fc2e15a5a3e]
#19 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(+0x151e91) [0x7fc2e1522e91]
#20 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyImport_ImportModuleLevel+0x122) [0x7fc2e1523392]
#21 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(+0xb4de4) [0x7fc2e1485de4]
#22 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyObject_Call+0x43) [0x7fc2e1425333]
#23 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyEval_CallObjectWithKeywords+0x47) [0x7fc2e15af7a7]
#24 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x3909) [0x7fc2e147bac9]
#25 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x7d8) [0x7fc2e15b0278]
#26 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyEval_EvalCode+0x19) [0x7fc2e1478029]
#27 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyRun_StringFlags+0x76) [0x7fc2e151b546]
#28 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x61c4) [0x7fc2e147e384]
#29 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x7d8) [0x7fc2e15b0278]
#30 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x5bf6) [0x7fc2e147ddb6]
#31 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x7d8) [0x7fc2e15b0278]
#32 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyEval_EvalCode+0x19) [0x7fc2e1478029]
#33 /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyRun_StringFlags+0x76) [0x7fc2e151b546]
#34 0x7fc2e19fc356 in Base::InterpreterSingleton::runString[abi:cxx11](char const*) from /usr/lib/freecad-daily/lib/libFreeCADBase.so+0x66
#35 0x7fc2e249915b in Gui::Application::runApplication() from /usr/lib/freecad-daily/lib/libFreeCADGui.so+0x114b
#36 freecad-daily(main+0x6db) [0x558812cb54fb]
#37 /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xe7) [0x7fc2df874b97]
#38 freecad-daily(_start+0x2a) [0x558812cb66aa]
Same result.
Any ideas?
|
blob: 1f27c5192b237f365404c2ff73227cbfd18545d9
import locale
locale.setlocale(locale.LC_NUMERIC, 'C')
import signal , time , sys , os, shutil
import pygtk
pygtk.require( '2.0' )
import gtk
import gobject
import time
import common.Config as Config
from common.Util.CSoundClient import new_csound_client
from common.Util.Profiler import TP
from SynthLab.SynthLabMain import SynthLabMain
from common.Util.Trackpad import Trackpad
from gettext import gettext as _
import commands
from sugar.activity import activity

class TamTamSynthLab(activity.Activity):
    def __init__(self, handle):
        activity.Activity.__init__(self, handle)
        for snd in ['lab1','lab2','lab3','lab4', 'lab5', 'lab6']:
            if not os.path.isfile(os.path.join(Config.DATA_DIR, snd)):
                shutil.copyfile(Config.SOUNDS_DIR + '/' + snd , Config.DATA_DIR + '/' + snd)
                os.system('chmod 0777 ' + Config.DATA_DIR + '/' + snd + ' &')
        color = gtk.gdk.color_parse(Config.WS_BCK_COLOR)
        self.modify_bg(gtk.STATE_NORMAL, color)
        self.set_title('TamTam SynthLab')
        self.set_resizable(False)
        self.trackpad = Trackpad( self )
        self.preloadTimeout = None
        self.connect('notify::active', self.onActive)
        self.connect('destroy', self.onDestroy)
        #load the sugar toolbar
        self.toolbox = activity.ActivityToolbox(self)
        self.set_toolbox(self.toolbox)
        self.activity_toolbar = self.toolbox.get_activity_toolbar()
        self.activity_toolbar.share.hide()
        self.activity_toolbar.keep.hide()
        self.toolbox.show()
        self.trackpad.setContext('synthLab')
        self.synthLab = SynthLabMain(self)
        self.connect('key-press-event', self.synthLab .onKeyPress)
        self.connect('key-release-event', self.synthLab .onKeyRelease)
        self.connect( "key-press-event", self.synthLab.onKeyPress )
        self.connect( "key-release-event", self.synthLab.onKeyRelease )
        self.set_canvas( self.synthLab )
        self.synthLab.onActivate(arg = None)
        self.show()

    def onPreloadTimeout( self ):
        if Config.DEBUG > 4: print "TamTam::onPreloadTimeout", self.preloadList
        t = time.time()
        if self.preloadList[0].load( t + 0.100 ): # finished preloading this object
            self.preloadList.pop(0)
            if not len(self.preloadList):
                if Config.DEBUG > 1: print "TamTam::finished preloading", time.time() - t
                self.preloadTimeout = False
                return False # finished preloading everything
        if Config.DEBUG > 4: print "TamTam::preload returned after", time.time() - t
        return True

    def onActive(self, widget = None, event = None):
        if widget.props.active == False:
            Config.logwrite(1, 'TamTamSynthLab.onActivate disconnecting csound')
            csnd = new_csound_client()
            csnd.connect(False)
        else:
            Config.logwrite(1, 'TamTamSynthLab.onActivate connecting csound')
            csnd = new_csound_client()
            csnd.connect(True)

    def onKeyPress(self, widget, event):
        pass

    def onKeyRelease(self, widget, event):
        pass

    def onDestroy(self, arg2):
        if Config.DEBUG: print 'DEBUG: TamTam::onDestroy()'
        self.synthLab.onDestroy()
        csnd = new_csound_client()
        csnd.connect(False)
        csnd.destroy()
        gtk.main_quit()

    # No more dir created by TamTam
    def ensure_dir(self, dir, perms=0777, rw=os.R_OK|os.W_OK):
        if not os.path.isdir( dir ):
            try:
                os.makedirs(dir, perms)
            except OSError, e:
                print 'ERROR: failed to make dir %s: %i (%s)\n' % (dir, e.errno, e.strerror)
        if not os.access(dir, rw):
            print 'ERROR: directory %s is missing required r/w access\n' % dir

    def read_file(self,file_path):
        self.synthLab.handleJournalLoad(file_path)

    def write_file(self,file_path):
        self.synthLab.handleJournalSave(file_path)
blob: 208b7f72bf8b5cadfe032ee1dc3414f2f1942e8b
import os, sys, string
import subprocess
num_replica = 2
num_stripe = 4
#Cachesize calculator
cache_size = "`echo $[ $(grep 'MemTotal' /proc/meminfo | sed 's/[^0-9]//g') / 5120 ]`"
class CreateVolfile:
def __init__ (self, server_dict, server, transport,
transports, options, server_array):
self.host_dict = server_dict
self.host = server
self.volume_name = options.volume_name
self.transport = transport
self.transports = transports
self.gfs_port = options.port
self.gfs_ib_port = options.port + 1
self.auth_parameters = options.auth_param
self.raid_type = options.raid_type
self.ib_devport = options.ib_dev
self.num_servers = len (self.host_dict.keys())
self.conf_dir = options.conf_dir
self.host_array = server_array
self.unused = options.unused
self.debug = options.debug
self.volume_size_server = options.size_server
self.volume_size_client = options.size_client
def create_mount_volfile (self):
raid_type = self.raid_type
if self.conf_dir:
mount_fd = file ("%s/%s-%s.vol" % (self.conf_dir,
str(self.volume_name),
str(self.transport)), "w")
else:
mount_fd = file ("%s-%s.vol" % (str(self.volume_name),
str(self.transport)), "w")
print "Generating client volfiles.. for transport '%s'" % (self.transport)
cmdline = string.join (sys.argv, ' ')
mount_fd.write ("## file auto generated by %s (mount.vol)\n" %
sys.argv[0])
mount_fd.write ("# Cmd line:\n")
mount_fd.write ("# $ %s\n\n" % cmdline)
if raid_type is not None:
# Used for later usage
mount_fd.write ("# RAID %d\n" % raid_type)
mount_fd.write ("# TRANSPORT-TYPE %s\n" % self.transport)
subvolumes = []
for host in self.host_dict.keys():
i = 1
for exports in self.host_dict[host]:
mount_fd.write ("volume %s-%s\n" % (host,i))
mount_fd.write (" type protocol/client\n")
mount_fd.write (" option transport-type %s\n" %
self.transport)
mount_fd.write (" option remote-host %s\n" % host)
if self.transport == 'ib-verbs':
mount_fd.write (" option transport.ib-verbs.port %d\n" %
self.ib_devport)
mount_fd.write (" option transport.remote-port %d\n" %
self.gfs_ib_port)
if self.transport == 'tcp':
mount_fd.write (" option transport.socket.nodelay on\n")
mount_fd.write (" option transport.remote-port %d\n" %
self.gfs_port)
mount_fd.write (" option remote-subvolume brick%s\n" %
i)
mount_fd.write ("end-volume\n\n")
i += 1
exportlist = {}
for entry in self.host_array:
node = entry.split(':')[0]
if not exportlist.has_key(node):
exportlist[node] = 1
else:
exportlist[node] += 1
subvolumes.append(str(node) + '-' + str(exportlist[node]))
# Stripe section.. if given
if raid_type is 0:
max_stripe_idx = len (subvolumes) / num_stripe
stripe_idx = 0
index = 0
while index < max_stripe_idx:
mount_fd.write ("volume stripe-%d\n" % index)
mount_fd.write (" type cluster/stripe\n")
if self.unused:
mount_fd.write ("# option block-size 128k\n")
mount_fd.write ("# option use-xattr no\n")
mount_fd.write (" subvolumes %s %s %s %s\n" %
(subvolumes[stripe_idx],
subvolumes[stripe_idx+1],
subvolumes[stripe_idx+2],
subvolumes[stripe_idx+3]))
mount_fd.write ("end-volume\n\n")
stripe_idx += 4
index +=1
# Replicate section
if raid_type is 1:
max_mirror_idx = len (subvolumes) / num_replica
mirror_idx = 0
index = 0
while index < max_mirror_idx:
mount_fd.write ("volume mirror-%d\n" % index)
mount_fd.write (" type cluster/replicate\n")
mount_fd.write (" subvolumes %s %s\n" %
(subvolumes[mirror_idx],
subvolumes[mirror_idx+1]))
mount_fd.write ("end-volume\n\n")
mirror_idx += 2
index += 1
# Distribute section
if raid_type is 0:
subvolumes = []
flag = 0
while flag < index:
subvolumes.append ("stripe-%d" % flag)
flag += 1
if raid_type is 1:
subvolumes = []
flag = 0
while flag < index:
subvolumes.append ("mirror-%d" % flag)
flag += 1
if len (subvolumes) > 1:
mount_fd.write ("volume distribute\n")
mount_fd.write (" type cluster/distribute\n")
if self.unused:
mount_fd.write("# option unhashed-sticky-bit yes # Used for migrating data while adding new nodes\n")
mount_fd.write("# option min-free-disk 5% # Minimum free disk available on the volume\n")
mount_fd.write (" subvolumes %s\n" %
string.join (subvolumes,' '))
mount_fd.write ("end-volume\n\n")
subvolumes[0] = "distribute"
if self.volume_size_client:
mount_fd.write ("volume quota\n")
mount_fd.write (" type features/quota\n")
mount_fd.write (" option disk-usage-limit %s\n" % self.volume_size_client)
if self.unused:
mount_fd.write ("# option minimum-free-disk-limit 10GB "
"# minimum free disk value (default) 0\n")
mount_fd.write ("# option refresh-interval 10\n")
mount_fd.write (" subvolumes %s\n" % subvolumes[0])
mount_fd.write ("end-volume\n\n")
mount_fd.write ("volume writebehind\n")
mount_fd.write (" type performance/write-behind\n")
mount_fd.write (" option cache-size 4MB\n")
if self.unused:
mount_fd.write ("# option enable-trickling-writes yes # Flush final write calls when network is free\n")
mount_fd.write ("# option enable-O_SYNC yes # Enable O_SYNC for write-behind\n")
mount_fd.write ("# option disable-for-first-nbytes 1 # Disable first nbytes with very small initial writes\n")
if self.volume_size_client:
mount_fd.write (" subvolumes quota\n")
else:
mount_fd.write (" subvolumes %s\n" % subvolumes[0])
mount_fd.write ("end-volume\n\n")
mount_fd.write ("volume readahead\n")
mount_fd.write (" type performance/read-ahead\n")
mount_fd.write (" option page-count 4\n")
if self.unused:
mount_fd.write ("# option force-atime-update yes # force updating atimes, default off\n")
mount_fd.write (" subvolumes writebehind\n")
mount_fd.write ("end-volume\n\n")
mount_fd.write ("volume iocache\n")
mount_fd.write (" type performance/io-cache\n")
mount_fd.write (" option cache-size %sMB\n" % cache_size)
mount_fd.write (" option cache-timeout 1\n")
if self.unused:
mount_fd.write ("# option priority *.html:1,abc*:2 # Priority list for iocaching files\n")
mount_fd.write (" subvolumes readahead\n")
mount_fd.write ("end-volume\n\n")
mount_fd.write ("volume quickread\n")
mount_fd.write (" type performance/quick-read\n")
mount_fd.write (" option cache-timeout 1\n")
mount_fd.write (" option max-file-size 64kB\n")
mount_fd.write (" subvolumes iocache\n")
mount_fd.write ("end-volume\n\n")
mount_fd.write ("volume statprefetch\n")
mount_fd.write (" type performance/stat-prefetch\n")
mount_fd.write (" subvolumes quickread\n")
mount_fd.write ("end-volume\n\n")
return
def create_export_volfile (self):
cmdline = string.join (sys.argv, ' ')
if self.conf_dir:
exp_fd = file ("%s/%s-export.vol" %
(self.conf_dir,
str(self.host + '-' + self.volume_name)),"w")
else:
exp_fd = file ("%s-export.vol" %
(str(self.host + '-' + self.volume_name)),"w")
print "Generating server volfiles.. for server '%s'" % (self.host)
exp_fd.write ("## file auto generated by %s (export.vol)\n" %
sys.argv[0])
exp_fd.write ("# Cmd line:\n")
exp_fd.write ("# $ %s\n\n" % cmdline)
total_bricks = []
i=1
for export in self.host_dict[self.host]:
exp_fd.write ("volume posix%d\n" % i)
exp_fd.write (" type storage/posix\n")
if self.unused:
exp_fd.write("# option o-direct enable # (default: disable) boolean type only\n")
exp_fd.write("# option export-statfs-size no # (default: yes) boolean type only\n")
exp_fd.write("# option mandate-attribute off # (default: on) boolean type only\n")
exp_fd.write("# option span-devices 8 # (default: 0) integer value\n")
exp_fd.write("# option background-unlink yes # (default: no) boolean type\n")
exp_fd.write (" option directory %s\n" % export)
exp_fd.write ("end-volume\n\n")
if self.volume_size_server:
exp_fd.write ("volume quota%d\n" % i)
exp_fd.write (" type features/quota\n")
exp_fd.write (" option disk-usage-limit %s\n" % self.volume_size_server)
if self.unused:
exp_fd.write ("# option minimum-free-disk-limit 10GB "
"# minimum free disk value (default) 0\n")
exp_fd.write ("# option refresh-interval 10\n")
exp_fd.write (" subvolumes posix%d\n" % i)
exp_fd.write ("end-volume\n\n")
exp_fd.write ("volume locks%d\n" % i)
exp_fd.write (" type features/locks\n")
if self.unused:
exp_fd.write ("# option mandatory on # Default off, used in specific applications\n")
if self.volume_size_server:
exp_fd.write (" subvolumes quota%d\n" % i)
else:
exp_fd.write (" subvolumes posix%d\n" % i)
exp_fd.write ("end-volume\n\n")
exp_fd.write ("volume brick%d\n" % i)
exp_fd.write (" type performance/io-threads\n")
exp_fd.write (" option thread-count 8\n")
if self.unused:
exp_fd.write ("# option autoscaling yes # Heuristic for autoscaling threads on demand\n")
exp_fd.write ("# option min-threads 2 # min count for thread pool\n")
exp_fd.write ("# option max-threads 64 # max count for thread pool\n")
exp_fd.write (" subvolumes locks%d\n" % i)
exp_fd.write ("end-volume\n\n")
total_bricks.append("brick%s" % i)
i += 1
for transport in self.transports:
exp_fd.write ("volume server-%s\n" % transport)
exp_fd.write (" type protocol/server\n")
exp_fd.write (" option transport-type %s\n" % transport)
for brick in total_bricks:
exp_fd.write (" option auth.addr.%s.allow %s\n" %
(brick, self.auth_parameters))
if transport == 'ib-verbs':
exp_fd.write (" option transport.ib-verbs.listen-port %d\n" % self.gfs_ib_port)
exp_fd.write (" option transport.ib-verbs.port %d\n" %
self.ib_devport)
if transport == 'tcp':
exp_fd.write (" option transport.socket.listen-port %d\n" % self.gfs_port)
exp_fd.write (" option transport.socket.nodelay on\n")
exp_fd.write (" subvolumes %s\n" %
string.join(total_bricks, ' '))
exp_fd.write ("end-volume\n\n")
return
|
Let me say this upfront.
I'm writing something that sounds like medical information, but I am not a medical professional of any kind.
I don't intend to write anything false, but even if something is wrong, I can't take responsibility for it.
I went in for a brain checkup and got the MRI image data (DICOM files). Unfortunately, the k-space data was not included.
Since there was nothing to be done about that, I tried applying a two-dimensional Fourier transform to the image data instead.
What is MRI
MRI stands for Magnetic Resonance Imaging.
The machine was exactly the one pictured on the site that came up top in a Google search; I got to go inside one of these.
MRI results and the Fourier transform
MRI gives you cross-sectional images of the brain, the heart, and so on.
The raw data behind those images is something called k-space,
and applying a two-dimensional Fourier transform to it yields the cross-sectional image; that raw data is what I wanted.
(Apparently it is not the inverse 2D Fourier transform. Fourier-transforming spatial-frequency data feels backwards from what I'm used to, which bugs me a little.)
Brain checkups apparently use MRI.
So I paid for a somewhat expensive brain checkup out of pocket, without insurance (about 160,000 yen).
It wasn't just for fun, though.
Something happened that made me think I might have had what's called a TIA (transient ischemic attack).
So I wanted a detailed examination of my brain; that was really the main reason.
It happened at the gym the other day.
Running 10 km on the treadmill once a week, at 10.5 km/h, had become a habit.
I had been feeling a bit off that day.
Apparently I sweated more than usual and started to feel dehydrated.
Worse, after finishing the run my hands went numb and I couldn't speak clearly.
That had never happened before.
I drank a lot of Pocari Sweat and recovered.
Last month I meant to write about this, but it had nothing to do with Python and I couldn't pull the story together, so I gave up on posting it. The above is what I had planned to write.
I got scared, wondering whether something was wrong with my brain.
I called #7119 (the phone number for people who can't judge whether they need an ambulance), was put through to a nurse, explained the situation in more detail than above, and was walked through a few simple tests. The verdict: "It's not an emergency, but you should see a doctor soon."
I had wanted the k-space images anyway, so I was going to get the checkup regardless.
But when I mentioned the date of the brain checkup I had originally booked (about two months out),
the nurse, in a gently correcting tone, said "...a bit sooner would be better."
So I had the brain checkup on the nearest Saturday.
The results arrived three weeks later (now).
The results were fine.
No blocked blood vessels or anything like that.
There were, however, some other findings on the health side (cholesterol, blood pressure), so in the end I'm glad I went.
Impressions of the MRI
It's cramped.
The machine is incredibly loud.
They put headphones on me, and I got to pick the music.
My body was strapped down.
Several times I was told "breathe in... and hold your breath."
A spray described as "a medicine that widens your blood vessels" was applied to my tongue, which put me in the same state as the headache you get from oversleeping.
That a single spray can change your body that much made me think "medicine is amazing."
DICOM files
DICOM is apparently the standard format for medical image data.
It can store not just images but also audio data and more.
It is packed with lots of other information as well.
Wondering "is k-space in here??",
I opened one in Notepad and got garbled characters here and there. It's a binary file, surely.
I won't be uploading the DICOM files.
That is because they contained
my personal information,
the name and address of the clinic I visited,
the operator's name,
and so on.
My own information is one thing, but publishing other people's makes me uneasy,
even if doing so would technically be legal.
So I only loaded the result images.
It turned out they had already been converted to images;
the k-space data does not seem to be included.
Well, of course.
Medical staff have no reason to play with two-dimensional Fourier transforms;
they only need the information that helps the patient.
Come to think of it, I have a feeling the operator did tell me, with a smile, "we can't give you the k-space, you know." I'm only remembering that now.
The MRI technician did know about the two-dimensional Fourier transform. I suppose they really do use it.
The manufacturer information(?) inside the DICOM file said Philips.
Philips: isn't that the electric shaver company?
Apparently they do more than razors.
https://ja.wikipedia.org/w/index.php?title=%E3%83%95%E3%82%A3%E3%83%AA%E3%83%83%E3%83%97%E3%82%B9&oldid=69059425
pydicom
To open DICOM files from Python, there is pydicom.
Read the DICOM file (load the data, including the image as a numpy array),
then imshow the numpy array with matplotlib,
which is the usual flow.
The article "PythonでDICOM画像をなんとかする" covers handling DICOM images with Python in detail.
In cmd, all lowercase:
pip install pydicom
is what it was, I think; sorry if I'm misremembering.
Naturally, it won't work if pip isn't installed.
Environment
python2.7
pydicom, numpy, matplotlib, tkFileDialog
Code
This program opens a DICOM file and renders it as images.
It also shows the two-dimensional Fourier transform of the image, but this is (probably/mostly) not k-space.
The behavior when opening non-DICOM files is not handled.
import pydicom as dc
import numpy as np
import matplotlib.pyplot as plt
import tkFileDialog as tk
name = tk.askopenfilename(filetypes=[("dicom",("all","*.*"))])
d = dc.read_file(name)
mx = np.max(np.real(np.fft.fft2(d.pixel_array)))/100000.
f,((ax1,ax2),(ax3,ax4))=plt.subplots(2,2)
ax1.imshow(d.pixel_array, cmap="gray")
ax2.imshow(np.abs(np.fft.fft2(d.pixel_array)), vmin=0, vmax=mx*1000, cmap="gray")
ax3.imshow(np.real(np.fft.fft2(d.pixel_array)), vmin=-mx, vmax=mx, cmap="gray")
ax4.imshow(np.imag(np.fft.fft2(d.pixel_array)), vmin=-mx, vmax=mx, cmap="gray")
ax1.set_title("pydicom pixel_data")
ax2.set_title("2DFFT absolute value")
ax3.set_title("2DFFT real part")
ax4.set_title("2DFFT imaginary part")
plt.show()
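A side note not in the original post: under Python 3 the same flow would use tkinter.filedialog and pydicom's dcmread. A minimal sketch, assuming a recent pydicom:
import pydicom
import numpy as np
import matplotlib.pyplot as plt
from tkinter import filedialog

name = filedialog.askopenfilename(filetypes=[("dicom", "*.*")])
d = pydicom.dcmread(name)        # read_file() is deprecated in newer pydicom
k = np.fft.fft2(d.pixel_array)   # 2D FFT of the reconstructed image (not true k-space)
mx = np.max(np.real(k)) / 100000.

f, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2)
ax1.imshow(d.pixel_array, cmap="gray")
ax2.imshow(np.abs(k), vmin=0, vmax=mx * 1000, cmap="gray")
ax3.imshow(np.real(k), vmin=-mx, vmax=mx, cmap="gray")
ax4.imshow(np.imag(k), vmin=-mx, vmax=mx, cmap="gray")
plt.show()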
How to run the program
Copy the code above into a text editor and save it as "a.py".
↓
In the file browser window showing that file, click the address bar where the path is normally shown,
type
cmd
and press Enter.
(Or shift+right-click an empty spot in the window and choose "Open PowerShell window here".)
↓
A black (or blue) console window appears.
↓
In that window, type
python a.py
and it runs. A file dialog should appear.
(Assuming the Python folder is set properly in your environment variables.) Pick a DICOM file and the images are displayed.
The image data comes back as a numpy array.
The images are apparently taken at millimeter intervals,
so plotting them in 3D should give a volumetric view.
That looks like a lot of work, so I haven't done it yet.
|
Macros
Macro creation
A macro is a function called from the BUILD file that can instantiate rules. Macros don’t give additional power, they are just used for encapsulation and code reuse. By the end of the loading phase, macros don’t exist anymore, and Bazel sees only the set of rules they created.
Native rules (i.e. rules that don’t need a load() statement) can be instantiated from the native module, e.g.
def my_macro(name, visibility=None):
    native.cc_library(
        name = name,
        srcs = ["main.cc"],
        visibility = visibility,
    )
If you need to know the package name (i.e. which BUILD file is calling the macro), use the function native.package_name().
Debugging
bazel query --output=build //my/path:all will show you how the BUILD file looks after evaluation. All macros, globs, loops are expanded. Known limitation: select expressions are currently not shown in the output.
You may filter the output based on generator_function (which function generated the rules) or generator_name (the name attribute of the macro), e.g.
$ bazel query --output=build 'attr(generator_function, my_macro, //my/path:all)'
To find out where exactly the rule foo is generated in a BUILD file, you can try the following trick. Insert this line near the top of the BUILD file: cc_library(name = "foo"). Run Bazel. You will get an exception when the rule foo is created (due to a name conflict), which will show you the full stack trace.
You can also use print for debugging. It displays the message as a warning during the loading phase. Except in rare cases, either remove print calls, or make them conditional under a debugging parameter that defaults to False before submitting the code to the depot.
Errors
If you want to throw an error, use the fail function. Explain clearly to the user what went wrong and how to fix their BUILD file. It is not possible to catch an error.
def my_macro(name, deps, visibility=None):
    if len(deps) < 2:
        fail("Expected at least two values in deps")
    # ...
Conventions
All public functions (functions that don’t start with underscore) that instantiate rules must have a name argument. This argument should not be optional (don’t give a default value).
Public functions should use a docstring following Python conventions.
In BUILD files, the name argument of the macros must be a keyword argument (not a positional argument).
The name attribute of rules generated by a macro should include the name argument as a prefix. For example, macro(name = "foo") can generate a cc_library foo and a genrule foo_gen.
In most cases, optional parameters should have a default value of None. None can be passed directly to native rules, which treat it the same as if you had not passed in any argument. Thus, there is no need to replace it with 0, False, or [] for this purpose. Instead, the macro should defer to the rules it creates, as their defaults may be complex or may change over time. Additionally, a parameter that is explicitly set to its default value looks different than one that is never set (or set to None) when accessed through the query language or build-system internals.
Macros should have an optional visibility argument.
Full example
The typical use-case for a macro is when you want to reuse a genrule, e.g.
genrule(
    name = "file",
    outs = ["file.txt"],
    cmd = "$(location generator) some_arg > $@",
    tools = [":generator"],
)
If you want to generate another file with different arguments, you may want to extract this code to a function.
The BUILD file will become simply:
load("//path:generator.bzl", "file_generator") file_generator( name = "file", arg = "some_arg", )
In order to keep BUILD files clean and declarative, you must put the function in a separate .bzl file. For example, write the definition of the macro in path/generator.bzl:
def file_generator(name, arg, visibility=None):
    native.genrule(
        name = name,
        outs = [name + ".txt"],
        cmd = "$(location generator) %s > $@" % arg,
        tools = ["//test:generator"],
        visibility = visibility,
    )
When you want to investigate what a macro does, use the following command to see the expanded form:
$ bazel query --output=build :file
# /absolute/path/test/ext.bzl:42:3
genrule(
    name = "file",
    tools = ["//test:generator"],
    outs = ["//test:file.txt"],
    cmd = "$(location generator) some_arg > $@",
)
|
So you got lots of documents and need fast querying, huh? Or you have tons of data and need to process and extract metrics. Either way, Elasticsearch (ES) can be a powerful engine to help you index, query and extract metrics from its document-driven storage. This post is very straightforward and intends to show how to use python to interact with the engine and index/retrieve/query documents.
The python community has developed two well known projects: elasticsearch-py and elasticsearch-dsl. While the former provides some tools to interact with ES and, IMHO, a more granular control over the actions, the latter was built to help you with the search and persistence. Let’s check that.
Connecting
The first question is: how do I connect to ES? By using elasticsearch-dsl you can create a default connection that will be used globally:
from elasticsearch_dsl import connections

connections.create_connection(hosts=['localhost'])
However, you might want to use a client and have a more granular control. By using elasticsearch-py you can achieve that:
from elasticsearch import Elasticsearch

client = Elasticsearch(hosts, http_auth=(username, password), **kwargs)
Execute client.indices.get_alias("*") to retrieve the existent indexes and check it is properly configured.
Persisting
Storing our documents is easy because elasticsearch-dsl provides DocType – a class that takes care of mapping your python class to JSON documents. Instead of worrying about JSON structures, let’s create a document that stores the user hit to a specific page:
from elasticsearch_dsl import DocType, Integer, Date, Keyword

class UserHit(DocType):
    page = Keyword()
    datetime = Date()
    user_id = Integer()
    environment = Keyword()
Pay attention to the fields we chose: Integer, Date, Keyword. They will be mapped to the Elasticsearch engine, which means you can use engine-specific features. For example, the datetime field can be used to search a date range or to aggregate data by minute, hour, day, or month. Another detail is the environment field: it is a way to integrate ES with different environments (staging, development and production), so you do not risk mixing fake data with production data.
**Updated on Feb 4th **: There is another strategy to not mess with production data: create indexes concatenated with the app environment. By using an env var, your application can create different indexes (e.g. myindex-2018.02.01-production, myindex-2018.02.01-staging, myindex-2018.02.01-development). Thanks for the contribution Robson Peixoto.
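A rough sketch of that per-environment index idea (my addition, using an assumed APP_ENV environment variable and illustrative names):

import os
from datetime import date

def index_name(base='myindex'):
    # e.g. 'myindex-2018.02.01-production'; APP_ENV and the base name are just examples
    env = os.environ.get('APP_ENV', 'development')
    return '{}-{}-{}'.format(base, date.today().strftime('%Y.%m.%d'), env)

# later: user_hit.save(using=client, index=index_name())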
Once you create the class indexing becomes easy:
user_hit = UserHit(
    page='product-list',
    datetime=datetime.now(),
    user_id=10,
    environment='production'
)

UserHit.init(using=client, index=index_name)

# Indexed can be True or False depending on the operation success
indexed = user_hit.save(using=client, index='myindex-2018.02.01')
You must pay attention to two issues: (i) before using the document you must ensure the mappings in Elasticsearch are created, which is why we call the init method before saving; (ii) check the return value, since the .save method can return either True or False.
Querying
How to query the documents? The snippet below illustrates a simple example.
from elasticsearch_dsl import Search
from elasticsearch import ElasticsearchException

search = (Search(using=client, index='myindex-2018.02.01')
          .sort('datetime')
          .query('match', page='product-list')
          .query('match', user_id=12)
          .query('match', environment='production'))

count = search.count()
search = search[0:count]
response = search.execute()

if not response.success():
    raise ElasticsearchException('Fail to get the hits')

return response.hits
It is important to mention that ES returns only 10 results by default, which is why we compute the count and then slice the search (search = search[0:count]).
Filtering
search = (Search(using=client, index='myindex-2018.02.01')
          .sort('datetime')
          .query('match', page='product-list')
          .query('match', user_id=12)
          .query('match', environment='production'))

search = search.filter('range', datetime={'gte': from_datetime,
                                          'lte': to_datetime,
                                          'time_zone': time_zone_delta})

count = search.count()
print(count)
search = search[0:count]
response = search.execute()
print(response.hits)
You have just queried, but now you want to filter the results by a date range. The .filter('range', ...) call does the trick.
Aggregating
You can generate metrics based on date, for example. The search.aggs.bucket(...) call tells ES to group the data into intervals of 30 minutes.
from elasticsearch_dsl import Search
from elasticsearch import ElasticsearchException

search = (Search(using=client, index='myindex-2018.02.01')
          .sort('datetime')
          .query('match', page='product-list')
          .query('match', user_id=12)
          .query('match', environment='production'))

search = search.filter('range', datetime={'gte': from_datetime,
                                          'lte': to_datetime,
                                          'time_zone': time_zone_delta})

# Group the matching documents into 30-minute buckets
search.aggs.bucket('datetime', 'date_histogram', field='datetime', interval='30m')

count = search.count()
search = search[0:count]
response = search.execute()

if not response.success():
    raise ElasticsearchException('Fail to get the hits')

return response.aggs['datetime']['buckets']
|
The problems with the M5StickV were not getting solved at all, so I decided to set it aside for now and got an M5Stack Gray. Eventually I plan to turn it into a rotary switch (a remote-control switch operated by the orientation of the unit).
The M5Stack development environment seems to be moving towards M5 UI.Flow, so I installed version 1.3.2, but I couldn't figure out how to read the Gray's gyro and was about to resign myself to installing the Arduino firmware for the time being, when I found that http://www.openspc2.org/reibun/M5/UI-Flow/1.2.3/ also explains how to detect the angle. Thank you!
Procedure
Each step below is written breezily in one line, but it is easy to get hopelessly confused. In particular, a beginner cannot tell apart the firmware for Arduino-style use, the firmware for MicroPython, and the firmware for UI.Flow, so don't try to download firmware yourself; first get the Burner from the official site.
Connect the M5Stack Gray to the Mac over USB.
It powers itself on, runs something, and makes noise. Quite annoying.
Download M5 Burner for Mac and run it.
Inside it, download UIFlow-1.3.2 (this takes quite a while).
I was working away from home, so I couldn't tell whether the line was slow or the app had hung, which was nerve-wracking.
Press the Burn button to write the firmware. (This assumes the USB-to-UART driver is already installed.)
Restart the M5Stack while holding the C button and connect it to a nearby Wi-Fi network.
If you boot it without holding the button, it drops into an endless demo loop.
From the PC, connect to the local Wi-Fi network that the M5Stack itself creates, and enter the SSID and password of your Wi-Fi access point on the web page it serves.
If the SSID contains a space, the space is silently replaced with "+" and the connection fails.
In that case, hit the following URL directly, where SSID is the SSID of the network you want to join, with spaces written as '%20'.
http://192.168.4.1/configure?ssid=SSID&password=PASSWORD
If the connection succeeds, the M5Stack screen shows an API key and a QR code, and a green dot appears in the top-right corner of the screen.
On the Mac, open http://flow.m5stack.com. The IDE appears.
Enter the API key shown on the M5Stack into the IDE on the Mac to connect to the device.
This connection is flaky and often refuses to cooperate.
Put some code together and press the run button in the IDE (the triangle at the top right); it runs on the M5.
Block programs are automatically translated into MicroPython, which is very welcome.
Finally, press the Upload button (the rightmost icon in the row at the bottom left of the screen) and the program is written to the M5's flash, so it is also executed after a reset.
To rewrite the program, reset the device, enter setup, and reconnect it to Wi-Fi.
Once everything is set up, you just cycle between steps 8-10 to keep improving the program, which is comfortable. Compared with the Obniz, having flash and a battery means it also works offline and without an external power supply, which is a plus.
Comparison with Obniz
The UI.Flow IDE is not cloud-hosted the way the Obniz IDE is. That is, it is not the case that entering the API key from anywhere shows you the current program for editing; you have to keep the code yourself.
With Obniz, two people can even program simultaneously; with UI.Flow, on the contrary, if two IDEs point at the same API key, only one of them can connect (the other is locked out), so you may get stuck if you carelessly leave your desk mid-edit and then try to continue coding from a cafe.
On the other hand, once the program is written to the device it does not need a network connection to run. An Obniz is unusable unless it is permanently connected to the cloud. (There is no "write to device" operation at all.)
Do you really have to go through the PC every time you want to switch Wi-Fi access points? The boot menu shows a list of APs, but it does not seem to let you pick one.
Developing via the cloud is fine, but what about the case where you want to run on a local network with no outside connectivity? You must be on the global network at the moment you write to flash, but I don't know how to switch only the AP afterwards.
For some reason the M5 frequently fails to reach the server (flow.m5stack.com), and development stalls while it does, which is the biggest source of stress. For the code I'm currently working on, MicroPython doesn't look usable, so I'm going back to the Arduino IDE.
Coding, for a start
I wanted to move an image according to the device's angle, but the gyro values appear to be not angles but angular acceleration, so getting an angle out of them would require very careful integration. Correcting with the direction of gravitational acceleration might make it workable, but for now let's just display the direction of gravity.
from m5stack import *
from m5ui import *
from uiflow import *
import imu
import math

setScreenColor(0x111111)
imu0 = imu.IMU()
w, h = lcd.screensize()

while True:
    lcd.clear()
    x, y, z = imu0.acceleration
    r = (x*x + y*y + z*z)**0.5
    x /= r
    y /= r
    z /= r
    cx = w/2
    cy = h/2
    R = 20*math.exp(-z/2)
    LR = R*5
    L = LR - R
    if z > 0:
        lcd.circle(int(cx-x*LR), int(cy+y*LR), int(R), fillcolor=0xff0000)
        lcd.line(int(cx), int(cy), int(cx-x*L), int(cy+y*L), 0xffffff)
    if z < 0:
        lcd.circle(int(cx-x*LR), int(cy+y*LR), int(R), fillcolor=0x0000ff)
    wait_ms(2)
The code will go up on GitHub.
An improved version of this, with the weight hung on a rubber band.
from m5stack import *
from m5ui import *
from uiflow import *
import imu
import math
import utime

setScreenColor(0x111111)
imu0 = imu.IMU()
w, h = lcd.screensize()
cx = w//2
cy = h//2

def draw(x, y, z, r):
    scale = math.exp(-z/200)
    r = int(r*scale)
    x = int(x*scale)
    y = int(y*scale)
    if z > 0:
        lcd.circle(cx-x, cy+y, r, fillcolor=0xff0000)
        lcd.line(cx, cy, cx-x, cy+y, 0xffffff)
    if z < 0:
        lcd.circle(cx-x, cy+y, r, fillcolor=0x0000ff)

x, y, z = 0, 10, 0
vx, vy, vz = 0, 0, 0
eL = 100
# k / m
km = 0.01
dump = 0.01
last = utime.ticks_ms()

while True:
    now = utime.ticks_ms()
    dt = (now - last)/10
    last = now
    ax, ay, az = imu0.acceleration
    L = (x**2 + y**2 + z**2)**0.5
    ax -= vx*dump
    ay -= vy*dump
    az -= vz*dump
    if L > eL:
        F = -km*(L-eL) / L
        ax += x*F
        ay += y*F
        az += z*F
    vx += ax*dt
    vy += ay*dt
    vz += az*dt
    x += vx*dt
    y += vy*dt
    z += vz*dt
    lcd.clear()
    draw(x, y, z, 20)
    wait_ms(2)
Pushing a little further, I tried making an attitude indicator (the aircraft instrument).
from m5stack import *
from m5ui import *
from uiflow import *
import imu
import math

setScreenColor(0x111111)

# 7-segment patterns for the digits 0-9
sevenseg = [0b1110111, 0b0010010, 0b1011101, 0b1011011, 0b0111010,
            0b1101011, 0b0101111, 0b1010010, 0b1111111, 0b1111010]

imu0 = imu.IMU()

def letter(L, x, y, dx, dy, c):
    if L & 0b1000000: lcd.line(x+dy, y-dx, x+dx+dy, y+dy-dx, c)
    if L & 0b0100000: lcd.line(x, y, x+dy, y-dx, c)
    if L & 0b0010000: lcd.line(x+dx+dy, y+dy-dx, x+dx, y+dy, c)
    if L & 0b0001000: lcd.line(x, y, x+dx, y+dy, c)
    if L & 0b0000100: lcd.line(x, y, x-dy, y+dx, c)
    if L & 0b0000010: lcd.line(x+dx, y+dy, x+dx-dy, y+dy+dx, c)
    if L & 0b0000001: lcd.line(x-dy, y+dx, x+dx-dy, y+dy+dx, c)

def number(v, x, y, dx, dy, c):
    x0, y0 = x, y
    for L in str(v):
        letter(sevenseg[int(L)], x0, y0, dx, dy, c)
        x0 += dx*3//2
        y0 += dy

w, h = lcd.screensize()
cx, cy = w//2, h//2
hist = []
sx, sy, sz = 0, 0, 0

while True:
    x, y, z = imu0.acceleration
    hist.append((x, y, z))
    sx, sy, sz = sx+x, sy+y, sz+z
    if len(hist) > 3:
        x, y, z = hist.pop(0)
        sx, sy, sz = sx-x, sy-y, sz-z
    # bank = math.atan(x,y)
    # average
    x, y, z = sx/3, sy/3, sz/3
    br = (x**2 + y**2)**0.5
    bx = int(x/br*100)
    by = int(y/br*100)
    ba = math.atan2(x, y)
    pitch = math.atan2(z, y)*180/3.14
    r = h/2 - 20
    L = 10
    lcd.clear()
    for a, l in ((-60, 2), (-45, 1), (-30, 2), (-20, 1), (-10, 1), (0, 2),
                 (10, 1), (20, 1), (30, 2), (45, 1), (60, 2)):
        aa = ba + a*3.14/180
        c, s = math.cos(aa), math.sin(aa)
        l = l*L + r
        lcd.line(cx+int(r*s), cy-int(r*c), cx+int(l*s), cy-int(l*c), 0x008000)
    lcd.line(cx-100, cy, cx-20, cy, 0x808080)
    lcd.line(cx+20, cy, cx+100, cy, 0x808080)
    lcd.line(cx, 20, cx+10, 40, 0x808080)
    lcd.line(cx, 20, cx-10, 40, 0x808080)
    lcd.line(cx-by*2, cy-bx*2, cx+by*2, cy+bx*2, 0x008000)
    for i in range(-3, 4):
        px = int(x/br*(pitch+i*10)*3)
        py = int(y/br*(pitch+i*10)*3)
        scale = abs(i) + 1
        lcd.line(cx-by*scale//6+px, cy-bx*scale//6-py,
                 cx+by*scale//6+px, cy+bx*scale//6-py, 0x008000)
        number(abs(i*10), cx+by*(scale+1)//6+px, cy+bx*(scale+1)//6-py,
               by//12, bx//12, 0x008000)
        number(abs(i*10), cx-by*(scale+2)//6+px, cy-bx*(scale+2)//6-py,
               by//12, bx//12, 0x008000)
    wait_ms(1)
Arduino
I had hoped MicroPython would make me happy, but it is slow after all. In particular, the slowness of the graphics library cannot be papered over (since you cannot draw into an off-screen buffer).
Install the firmware and library following the instructions on the site below.
…
It turned out that the screen flicker can be fixed by drawing into a double buffer using a Sprite.
After a fair amount of struggling, I managed to build something that more or less passes for an attitude indicator.
The code is here.
|
To find the information hidden in data and turn it into a meaningful story, collecting the right data is essential. You can collect data yourself for a specific analysis, but there are limits to doing so, and
technologies for collecting big data include crawling, which gathers web information such as social media and news from the internet, sensing, which collects data through various kinds of sensors, and Cassandra, a database management system for distributed environments.
Exercise: load a csv file.
hr <- read.csv("C:/data/hrdata.csv",header=T, stringsAsFactors = T)
names(hr)
## [1] "id" "satisfaction_level" "last_evaluation" ## [4] "number_project" "average_montly_hours" "time_spend_company" ## [7] "Work_accident" "left" "promotion_last_5years"## [10] "sales" "salary" "sex"
Exercise: load a csv file without any options.
hr1 <- read.csv("C:/data/hrdata.csv")
names(hr1)
## [1] "id" "satisfaction_level" "last_evaluation" ## [4] "number_project" "average_montly_hours" "time_spend_company" ## [7] "Work_accident" "left" "promotion_last_5years"## [10] "sales" "salary" "sex"
Exercise 1: load an Excel file using the 'readxl' package.
library(readxl)
hre <- read_excel("C:/data/hrdata.xlsx",sheet = 1,col_names = T)
names(hre)
## [1] "id" "satisfaction_level" "last_evaluation" ## [4] "number_project" "average_montly_hours" "time_spend_company" ## [7] "Work_accident" "left" "promotion_last_5years"## [10] "sales" "salary" "sex"
Exercise 2: load an Excel file using the 'xlsx' package.
#install.packages("xlsx")
library(xlsx)
hre1 <- read.xlsx("C:/data/hrdata.xlsx", sheetIndex = 1)
names(hre1)
## [1] "id" "satisfaction_level" "last_evaluation" ## [4] "number_project" "average_montly_hours" "time_spend_company" ## [7] "Work_accident" "left" "promotion_last_5years"## [10] "sales" "salary" "sex"
Exercise 1: load a txt file without checking the encoding.
job <- readLines("C:/study/dataset/일자리.txt")
job
## [1] "癤우젣議곗뾽 \xec씪\xec옄由ъ\x99\u0080 40\xeb\x8c\u0080 \xec씪\xec옄由\xac 6遺꾧린 留뚯뿉 '利앷\xb0\u0080' \xec쟾\xed솚"
## [2] ""
## [3] "留뚯꽦\xec쟻\xec씤 媛먯냼\xec꽭媛\u0080 \xec씠\xec뼱吏\u0080\xeb뜕 \xec젣議곗뾽怨\xbc 40\xeb\x8c\u0080 \xec씪\xec옄由ш\xb0\u0080 利앷\xb0\u0080 \xec쟾\xed솚\xec뿉 \xec꽦怨듯뻽\xeb떎."
## [4] ""
## [5] "\xed넻怨꾩껌\xec씠 27\xec씪 諛쒗몴\xed븳 '2019\xeb뀈 3遺꾧린(8\xec썡 湲곗\xa4\u0080) \xec엫湲덇렐濡\x9c \xec씪\xec옄由щ룞\xed뼢'\xec뿉 \xeb뵲瑜대㈃ 吏\u0080\xeb궃\xed빐 3遺꾧린 \xec쟾泥\xb4 \xec엫湲덇렐濡\x9c \xec씪\xec옄由щ뒗 1873留\x8c 9000媛쒖\x98\u0080\xeb떎."
## [6] ""
## [7] "\xec쟾\xeb뀈\xeb룄\xec씤 2018\xeb뀈 3遺꾧린蹂대떎 63留\x8c 5000媛\x9c(3.5%) 利앷\xb0\u0080\xed븳 寃껋쑝濡\x9c, 2018\xeb뀈 1遺꾧린遺\u0080\xed꽣 愿\u0080\xeb젴 \xed넻怨꾨\xa5\xbc \xec옉\xec꽦\xed븳 \xec씠\xeb옒 利앷\xb0\u0080 \xed룺\xec씠 媛\u0080\xec옣 而몃떎."
## [8] ""
## [9] "\xed듅\xed엳 \xec젣議곗뾽 \xec씪\xec옄由ш\xb0\u0080 2018\xeb뀈 3遺꾧린 419留\x8c 6000媛쒖뿉\xec꽌 吏\u0080\xeb궃\xed빐 3遺꾧린 419留\x8c 9000媛쒕줈 3000媛쒓\xb0\u0080 \xeb뒛\xec뿀\xeb떎."
## [10] ""
## [11] "\xec쟾\xeb뀈 \xeb룞湲\xb0 \xeb\x8c\u0080鍮\x84 利앷\xb0\u0080\xec쑉\xec씠 鍮꾨줉 0.1%濡\x9c 誘몃\xaf명븯吏\u0080留\x8c, 2018\xeb뀈 1遺꾧린 0.1% 利앷\xb0\u0080瑜\xbc 湲곕줉\xed븳 \xec씠\xed썑 \xeb궡\xeb궡 媛먯냼瑜\xbc 湲곕줉\xed븯\xeb떎媛\u0080 6遺꾧린 留뚯뿉 \xeb떎\xec떆 利앷\xb0\u0080濡\x9c \xeb룎\xec븘\xec꽑 寃껋씠\xeb떎."
## [12] ""
## [13] "\xec쟾泥\xb4 \xec엫湲덇렐濡\x9c \xec씪\xec옄由\xac 媛\u0080\xec슫\xeb뜲 \xec젣議곗뾽 鍮꾩쨷\xec씠 22.4%濡\x9c \xec븬\xeb룄\xec쟻(\xec젣議곗뾽 \xeb떎\xec쓬\xec\x9d\u0080 \xeb룄\xec냼留ㅺ\xb0\u0080 10.9%)\xec씤 \xec젏\xec쓣 怨좊젮\xed븯硫\xb4 \xec씠踰\x88 \xec젣議곗뾽 \xec씪\xec옄由\xac 利앷\xb0\u0080媛\u0080 \xeb뜑\xec슧 \xeb룍蹂댁씤\xeb떎."
## [14] ""
## [15] "吏\u0080\xeb궃\xed빐 3遺꾧린 \xec씪\xec옄由ш\xb0\u0080 媛\u0080\xec옣 留롮씠 利앷\xb0\u0080\xed븳 \xec궛\xec뾽\xec\x9d\u0080 蹂닿굔쨌\xec궗\xed쉶蹂듭\xa7\u0080(16留\x8c 6000媛\x9c)\xec\x98\u0080\xeb떎."
## [16] ""
## [17] "\xec씠\xec뼱 \xeb룄\xec냼留\xa4(7留\x8c 9000媛\x9c), 怨듦났\xed뻾\xec젙(6留\x8c 7000媛\x9c), \xec쟾臾맞룰낵\xed븰쨌湲곗닠(5留\x8c 7000媛\x9c) \xeb벑\xec쓽 \xec닚\xec쑝濡\x9c \xec씪\xec옄由\xac 利앷\xb0\u0080媛\u0080 留롮븯\xeb떎."
## [18] ""
## [19] "嫄댁꽕\xec뾽\xeb룄 吏\u0080\xeb궃\xed빐 3遺꾧린 \xec씪\xec옄由ш\xb0\u0080 \xec쟾\xeb뀈 \xeb룞湲\xb0 \xeb\x8c\u0080鍮\x84 3留\x8c 2000媛\x9c 利앷\xb0\u0080\xed븯硫\xb0 愿\u0080\xeb젴 \xed넻怨\x84 \xec옉\xec꽦 \xec씠\xed썑 泥섏쓬\xec쑝濡\x9c 利앷\xb0\u0080(1.8%)瑜\xbc 湲곕줉\xed뻽\xeb떎."
## [20] ""
## [21] "\xed넻怨꾩껌\xec\x9d\u0080 洹몃윭\xeb굹 \"\xec씠\xeb뒗 嫄댁꽕 寃쎄린媛\u0080 醫뗭븘\xec졇\xec꽌媛\u0080 \xec븘\xeb땲\xeb씪 鍮꾧탳 \xeb\x8c\u0080\xec긽\xec씤 2018\xeb뀈 3遺꾧린 嫄댁꽕\xec뾽\xec씠 \xeb떦\xec떆 \xed룺\xec뿼 \xed깛\xec뿉 \xec썙\xeb굺 醫뗭\xa7\u0080 \xec븡\xec븯\xeb뜕 '湲곗\xa0\u0080\xed슚怨\xbc' \xec쁺\xed뼢\"\xec씠\xeb씪怨\xa0 \xec꽕紐낇뻽\xeb떎."
## [22] ""
## [23] "\xec젣議곗뾽 \xeb벑 \xec씪\xec옄由\xac 利앷\xb0\u0080\xec뿉 \xed옒\xec엯\xec뼱 \xec슦由\xac \xec궗\xed쉶 怨좎슜 臾몄젣\xec쓽 \xed빑\xec떖 以\x91 \xed븯\xeb굹\xec씤 40\xeb\x8c\u0080 \xec씪\xec옄由\xac \xec뿭\xec떆 媛먯냼\xec뿉\xec꽌 利앷\xb0\u0080濡\x9c 諛섏쟾\xed뻽\xeb떎."
## [24] ""
## [25] "吏\u0080\xeb궃\xed빐 3遺꾧린 40\xeb\x8c\u0080 \xec씪\xec옄由щ뒗 \xec쟾\xeb뀈 \xeb룞湲\xb0 \xeb\x8c\u0080鍮\x84 3留\x8c 4000媛\x9c 利앷\xb0\u0080\xed뻽\xeb떎."
## [26] ""
## [27] "40\xeb\x8c\u0080 \xec씪\xec옄由ш\xb0\u0080 \xec쟾\xeb뀈 \xeb룞湲\xb0 \xeb\x8c\u0080鍮\x84 利앷\xb0\u0080\xed븳 嫄\xb4 2018\xeb뀈 1遺꾧린 2留\x8c 2000媛\x9c 利앷\xb0\u0080 \xec씠\xed썑 \xec뿭\xec떆 6遺꾧린 留뚯뿉 泥섏쓬\xec씠\xeb떎."
## [28] ""
## [29] "\xed븳\xed렪, 吏\u0080\xeb궃\xed빐 3遺꾧린\xec뿉\xeb뒗 40\xeb\x8c\u0080瑜\xbc \xed룷\xed븿\xed빐 \xec쟾 \xec뿰\xeb졊\xeb\x8c\u0080\xec뿉\xec꽌 \xec쟾\xeb뀈 \xeb룞湲\xb0 \xeb\x8c\u0080鍮\x84 \xec씪\xec옄由ш\xb0\u0080 利앷\xb0\u0080\xed뻽\xeb떎."
## [30] ""
## [31] "\xec젙遺\u0080\xec쓽 '\xeb끂\xec씤 \xec씪\xec옄由\xac \xec궗\xec뾽' \xed슚怨쇰\xa5\xbc \xeb늻由ш퀬 \xec엳\xeb뒗 60\xeb\x8c\u0080 \xec씠\xec긽 \xec씪\xec옄由ш\xb0\u0080 28留\x8c 媛\x9c(\xec쟾泥\xb4 \xec씪\xec옄由\xac 利앷\xb0\u0080\xec쓽 44.1%)濡\x9c 媛\u0080\xec옣 留롮씠 \xeb뒛\xec뿀\xeb떎."
## [32] ""
## [33] "50\xeb\x8c\u0080 \xec씪\xec옄由\xac 利앷\xb0\u0080媛\u0080 23留\x8c 1000媛\x9c(\xec쟾泥\xb4 \xec씪\xec옄由\xac 利앷\xb0\u0080\xec쓽 36.4%)濡\x9c 洹몃떎\xec쓬\xec씠\xec뿀怨\xa0, 20\xeb\x8c\u0080 \xec씠\xed븯 \xec씪\xec옄由\xac 利앷\xb0\u0080\xeb뒗 8留\x8c 2000媛\x9c(\xec쟾泥\xb4 \xec씪\xec옄由\xac 利앷\xb0\u0080\xec쓽 12.9%)\xec\x98\u0080\xeb떎."
## [34] ""
## [35] "諛섎㈃, 30\xeb\x8c\u0080 \xec씪\xec옄由\xac 利앷\xb0\u0080\xeb뒗 8000媛쒖뿉 洹몄낀\xeb떎."
## [36] ""
## [37] "吏\u0080\xeb궃\xed빐 3遺꾧린\xec뿉 利앷\xb0\u0080\xed븳 \xec쟾泥\xb4 \xec엫湲덇렐濡\x9c \xec씪\xec옄由\xac 63留\x8c 5000媛쒖뿉\xec꽌 30\xeb\x8c\u0080\xec\x99\u0080 40\xeb\x8c\u0080媛\u0080 李⑥\xa7\u0080\xed븯\xeb뒗 鍮꾩쨷\xec\x9d\u0080 6.6%\xec뿉 遺덇낵\xed뻽\xeb떎."
## [38] ""
## [39] "30\xeb\x8c\u0080\xec\x99\u0080 40\xeb\x8c\u0080 \xec씪\xec옄由ш\xb0\u0080 \xec슦由\xac \xec궗\xed쉶 怨좎슜 臾몄젣 \xed빐寃곗쓽 \xec뿴\xec뇿\xec엫\xec씠 \xeb떎\xec떆 \xed븳踰\x88 \xed솗\xec씤\xeb맂 \xec뀍\xec씠\xeb떎."
Exercise 2: load the txt file after setting the encoding.
job <- readLines("C:/study/dataset/일자리.txt",encoding='UTF-8')
job
## [1] "<U+FEFF>제조업 일자리와 40대 일자리 6분기 만에 '증가' 전환"
## [2] ""
## [3] "만성적인 감소세가 이어지던 제조업과 40대 일자리가 증가 전환에 성공했다."
## [4] ""
## [5] "통계청이 27일 발표한 '2019년 3분기(8월 기준) 임금근로 일자리동향'에 따르면 지난해 3분기 전체 임금근로 일자리는 1873만 9000개였다."
## [6] ""
## [7] "전년도인 2018년 3분기보다 63만 5000개(3.5%) 증가한 것으로, 2018년 1분기부터 관련 통계를 작성한 이래 증가 폭이 가장 컸다."
## [8] ""
## [9] "특히 제조업 일자리가 2018년 3분기 419만 6000개에서 지난해 3분기 419만 9000개로 3000개가 늘었다."
## [10] ""
## [11] "전년 동기 대비 증가율이 비록 0.1%로 미미하지만, 2018년 1분기 0.1% 증가를 기록한 이후 내내 감소를 기록하다가 6분기 만에 다시 증가로 돌아선 것이다."
## [12] ""
## [13] "전체 임금근로 일자리 가운데 제조업 비중이 22.4%로 압도적(제조업 다음은 도소매가 10.9%)인 점을 고려하면 이번 제조업 일자리 증가가 더욱 돋보인다."
## [14] ""
## [15] "지난해 3분기 일자리가 가장 많이 증가한 산업은 보건·사회복지(16만 6000개)였다."
## [16] ""
## [17] "이어 도소매(7만 9000개), 공공행정(6만 7000개), 전문·과학·기술(5만 7000개) 등의 순으로 일자리 증가가 많았다."
## [18] ""
## [19] "건설업도 지난해 3분기 일자리가 전년 동기 대비 3만 2000개 증가하며 관련 통계 작성 이후 처음으로 증가(1.8%)를 기록했다."
## [20] ""
## [21] "통계청은 그러나 \"이는 건설 경기가 좋아져서가 아니라 비교 대상인 2018년 3분기 건설업이 당시 폭염 탓에 워낙 좋지 않았던 '기저효과' 영향\"이라고 설명했다."
## [22] ""
## [23] "제조업 등 일자리 증가에 힘입어 우리 사회 고용 문제의 핵심 중 하나인 40대 일자리 역시 감소에서 증가로 반전했다."
## [24] ""
## [25] "지난해 3분기 40대 일자리는 전년 동기 대비 3만 4000개 증가했다."
## [26] ""
## [27] "40대 일자리가 전년 동기 대비 증가한 건 2018년 1분기 2만 2000개 증가 이후 역시 6분기 만에 처음이다."
## [28] ""
## [29] "한편, 지난해 3분기에는 40대를 포함해 전 연령대에서 전년 동기 대비 일자리가 증가했다."
## [30] ""
## [31] "정부의 '노인 일자리 사업' 효과를 누리고 있는 60대 이상 일자리가 28만 개(전체 일자리 증가의 44.1%)로 가장 많이 늘었다."
## [32] ""
## [33] "50대 일자리 증가가 23만 1000개(전체 일자리 증가의 36.4%)로 그다음이었고, 20대 이하 일자리 증가는 8만 2000개(전체 일자리 증가의 12.9%)였다."
## [34] ""
## [35] "반면, 30대 일자리 증가는 8000개에 그쳤다."
## [36] ""
## [37] "지난해 3분기에 증가한 전체 임금근로 일자리 63만 5000개에서 30대와 40대가 차지하는 비중은 6.6%에 불과했다."
## [38] ""
## [39] "30대와 40대 일자리가 우리 사회 고용 문제 해결의 열쇠임이 다시 한번 확인된 셈이다."
Exercise: load a pdf file as character (string) data.
#install.packages('pdftools')
library(pdftools)
moon <- pdf_text("C:/study/dataset/신년사.pdf")
moon
|
Keras and me #
Thinking back over the two or three years since I entered machine learning, Keras has always been at my side. If I hadn't happened to run into such an easy-to-use framework back then, one that let me turn ideas into code quickly, I'm not sure I could have kept going; after all, those were the days of theano, pylearn, caffe and torch, which to this day still read like ancient scripture to me.
Later, to broaden my horizons, I also spent some time learning tensorflow and wrote a few programs in pure tensorflow, but no matter what, I could not let go of Keras. As my understanding of Keras deepened, and especially after spending a little time reading its source code, I found that Keras is not nearly as "inflexible" as people complain. In fact, its delicate abstractions let us implement many complex features with ease. More and more, Keras strikes me as a genuinely beautiful piece of craftsmanship, a testament to the skill of its developers.
This post covers customizing models in Keras. Relatively speaking this is intermediate-to-advanced material, so readers who are just getting started can skip it for now.
Custom layers #
This section introduces custom layers in Keras and some tricks for using them, through which we can see how elegantly Keras layers are put together.
Basic definition #
In Keras, the simplest way to define a custom layer is through a Lambda layer:
from keras.layers import *
from keras import backend as K
x_in = Input(shape=(10,))
x = Lambda(lambda x: x+2)(x_in)  # add 2 to the input
Sometimes we want the training phase and the test phase to behave differently, for example adding some noise to the input during training and dropping it at test time. This is done with K.in_train_phase, for example:
def add_noise_in_train(x):
    x_ = x + K.random_normal(shape=K.shape(x))  # add standard Gaussian noise
    return K.in_train_phase(x_, x)

x_in = Input(shape=(10,))
x = Lambda(add_noise_in_train)(x_in)  # noise is added during training and dropped at test time
Of course, a Lambda layer is only suitable when no new trainable parameters are needed. If the functionality you want requires adding new weights to the model, then you have to write a custom Layer. That is not complicated either; compared with a Lambda layer it is only a few more lines of code, and the official guide explains it clearly:
https://keras.io/layers/writing-your-own-keras-layers/
Here is the example from that page, copied over:
class MyLayer(Layer):

    def __init__(self, output_dim, **kwargs):
        self.output_dim = output_dim  # you can define extra attributes here for later use
        super(MyLayer, self).__init__(**kwargs)  # this call is required

    def build(self, input_shape):
        # add a trainable weight
        self.kernel = self.add_weight(name='kernel',
                                      shape=(input_shape[1], self.output_dim),
                                      initializer='uniform',
                                      trainable=True)

    def call(self, x):
        # define the layer's behaviour, analogous to a Lambda layer's function
        return K.dot(x, self.kernel)

    def compute_output_shape(self, input_shape):
        # compute the output shape; if it equals the input shape this can be omitted,
        # otherwise it is best to spell it out
        return (input_shape[0], self.output_dim)
Layers with multiple outputs #
Almost every layer we normally encounter, including all the layers that ship with Keras, has a single output: one or more inputs go in, and one result comes out. So can Keras define a layer with two outputs? The answer is yes, but you must define output_shape explicitly. For example, the layer below simply splits its input in half and returns both halves at the same time.
class SplitVector(Layer):

    def __init__(self, **kwargs):
        super(SplitVector, self).__init__(**kwargs)

    def call(self, inputs):
        # slice the tensor along its second dimension and return a list
        in_dim = K.int_shape(inputs)[-1]
        return [inputs[:, :in_dim//2], inputs[:, in_dim//2:]]

    def compute_output_shape(self, input_shape):
        # output_shape must be a corresponding list as well
        in_dim = input_shape[-1]
        return [(None, in_dim//2), (None, in_dim-in_dim//2)]

x1, x2 = SplitVector()(x_in)  # usage
Combining a layer with the loss #
We have previously looked at defining complex loss functions in Keras. Experienced readers will know that the basic definition of a loss in Keras is a function taking y_true and y_pred. In more involved cases, however, the loss is not just a function of the prediction and the target; it may also have to combine the model's weights in more complex computations.
Here we again take center loss as an example and present an approach based on a custom layer.
class Dense_with_Center_loss(Layer):

    def __init__(self, output_dim, **kwargs):
        self.output_dim = output_dim
        super(Dense_with_Center_loss, self).__init__(**kwargs)

    def build(self, input_shape):
        # add the trainable weights
        self.kernel = self.add_weight(name='kernel',
                                      shape=(input_shape[1], self.output_dim),
                                      initializer='glorot_normal',
                                      trainable=True)
        self.bias = self.add_weight(name='bias',
                                    shape=(self.output_dim,),
                                    initializer='zeros',
                                    trainable=True)
        self.centers = self.add_weight(name='centers',
                                       shape=(self.output_dim, input_shape[1]),
                                       initializer='glorot_normal',
                                       trainable=True)

    def call(self, inputs):
        # for center loss the forward result is still the same as a Dense layer's,
        # i.e. the usual matrix multiplication plus bias
        self.inputs = inputs
        return K.dot(inputs, self.kernel) + self.bias

    def compute_output_shape(self, input_shape):
        return (input_shape[0], self.output_dim)

    def loss(self, y_true, y_pred, lamb=0.5):
        # define the full loss
        y_true = K.cast(y_true, 'int32')  # make sure y_true has dtype int32
        crossentropy = K.sparse_categorical_crossentropy(y_true, y_pred, from_logits=True)
        centers = K.gather(self.centers, y_true[:, 0])  # pick out each sample's center
        center_loss = K.sum(K.square(centers - self.inputs), axis=1)  # compute the center loss
        return crossentropy + lamb * center_loss

f_size = 2

x_in = Input(shape=(784,))
f = Dense(f_size)(x_in)

dense_center = Dense_with_Center_loss(10)
output = dense_center(f)

model = Model(x_in, output)
model.compile(loss=dense_center.loss,
              optimizer='adam',
              metrics=['sparse_categorical_accuracy'])

# y_train holds integer class ids here, no need to convert to one-hot
model.fit(x_train, y_train, epochs=10)
Fancy callbacks #
Besides modifying the model, we may also want to do plenty of things during training: compute a validation metric at the end of every epoch, save the best model, lower the learning rate after some number of epochs, adjust a regularization weight, and so on. All of these can be done through callbacks.
Official page for callbacks: https://keras.io/callbacks/
Saving the best model #
In Keras, the most convenient way to keep the best model according to a validation metric is the built-in ModelCheckpoint, for example:
checkpoint = ModelCheckpoint(filepath='./best_model.weights',
                             monitor='val_acc',
                             verbose=1,
                             save_best_only=True)

model.fit(x_train,
          y_train,
          epochs=10,
          validation_data=(x_test, y_test),
          callbacks=[checkpoint])
However, simple as it is, this approach has an obvious drawback: the metric it monitors is determined by the metrics passed to compile, and a custom metric in Keras must be written in terms of tensor operations. In other words, if the quantity you actually care about cannot be expressed as tensor operations (a BLEU score, say), then you cannot write it as a metric function and this scheme no longer works.
So here is an all-purpose alternative: write your own callback and compute whatever you like. For example:
from keras.callbacks import Callback

def evaluate():  # evaluation function
    pred = model.predict(x_test)
    return np.mean(pred.argmax(axis=1) == y_test)  # compute whatever you like here

# Define a Callback that computes validation accuracy and keeps the best model
class Evaluate(Callback):

    def __init__(self):
        self.accs = []
        self.highest = 0.

    def on_epoch_end(self, epoch, logs=None):
        acc = evaluate()
        self.accs.append(acc)
        if acc >= self.highest:  # save the weights of the best model
            self.highest = acc
            model.save_weights('best_model.weights')
        # and run whatever else you like
        print('acc: %s, highest: %s' % (acc, self.highest))

evaluator = Evaluate()
model.fit(x_train,
          y_train,
          epochs=10,
          callbacks=[evaluator])
Modifying hyperparameters #
You may also want to fine-tune hyperparameters during training. The most common need is adjusting the learning rate per epoch, which can be done simply with LearningRateScheduler, itself one of the callbacks.
from keras.callbacks import LearningRateScheduler

def lr_schedule(epoch):
    # return a different learning rate depending on the epoch
    if epoch < 50:
        lr = 1e-2
    elif epoch < 80:
        lr = 1e-3
    else:
        lr = 1e-4
    return lr

lr_scheduler = LearningRateScheduler(lr_schedule)
model.fit(x_train,
          y_train,
          epochs=10,
          callbacks=[evaluator, lr_scheduler])
What about other hyperparameters, such as the lamb of the center loss above, or a similar regularization weight? In that case we need to make the parameter a Variable and then write a callback that assigns it dynamically. Suppose, for instance, we have defined a loss like this:
def mycrossentropy(y_true, y_pred, e=0.1):
    loss1 = K.categorical_crossentropy(y_true, y_pred)
    loss2 = K.categorical_crossentropy(K.ones_like(y_pred)/nb_classes, y_pred)
    return (1-e)*loss1 + e*loss2
If we want to change the parameter e on the fly, we can rewrite it as:
e = K.variable(0.1)

def mycrossentropy(y_true, y_pred):
    loss1 = K.categorical_crossentropy(y_true, y_pred)
    loss2 = K.categorical_crossentropy(K.ones_like(y_pred)/nb_classes, y_pred)
    return (1-e)*loss1 + e*loss2

model.compile(loss=mycrossentropy,
              optimizer='adam')

class callback4e(Callback):

    def __init__(self, e):
        self.e = e

    def on_epoch_end(self, epoch, logs={}):
        if epoch >= 100:  # set e to 0.01 after 100 epochs
            K.set_value(self.e, 0.01)

model.fit(x_train,
          y_train,
          epochs=10,
          callbacks=[callback4e(e)])
Note that the Callback class supports hooks at several different stages: on_epoch_begin, on_epoch_end, on_batch_begin, on_batch_end, on_train_begin and on_train_end. Each runs at a different point (easy to guess from the names), and they can be combined to implement quite complex behaviour. Warmup is one example: after setting a default learning rate you do not start training at that rate straight away; instead, over the first few epochs the rate is increased gradually from zero up to the default, which can be thought of as easing the model into a better initialization. Reference code:
class Evaluate(Callback):

    def __init__(self):
        self.num_passed_batchs = 0
        self.warmup_epochs = 10

    def on_batch_begin(self, batch, logs=None):
        # params is a dict of settings the model passes to the Callback automatically
        if self.params['steps'] is None:
            self.steps_per_epoch = np.ceil(1. * self.params['samples'] / self.params['batch_size'])
        else:
            self.steps_per_epoch = self.params['steps']
        if self.num_passed_batchs < self.steps_per_epoch * self.warmup_epochs:
            # over the first 10 epochs, increase the learning rate linearly from zero to 0.001
            K.set_value(self.model.optimizer.lr,
                        0.001 * (self.num_passed_batchs + 1) / self.steps_per_epoch / self.warmup_epochs)
        self.num_passed_batchs += 1
Keras, unlimited possibilities #
Keras has plenty of other noteworthy tricks, for example using model.add_loss to add loss terms flexibly, nesting model calls, or using it purely as a thin high-level API over tensorflow, and so on; I will not list them all here. Readers with questions or ideas are welcome to discuss them in the comments.
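As a rough illustration of the model.add_loss idea just mentioned (this sketch is mine, not from the original post): any symbolic tensor can be registered as an extra loss term, for example an activity penalty on a hidden layer.

from keras.layers import Input, Dense
from keras.models import Model
from keras import backend as K

x_in = Input(shape=(784,))
h = Dense(64, activation='relu')(x_in)
y_out = Dense(10, activation='softmax')(h)

model = Model(x_in, y_out)
model.add_loss(1e-4 * K.sum(K.square(h)))  # extra loss term built directly from a tensor
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam')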
We usually assume that a highly encapsulated library like Keras must be short on flexibility, but that is not actually the case. Keep in mind that Keras does not simply call ready-made high-level functions in tensorflow or theano; it only wraps a set of basic primitives through its backend and then rewrites everything else, every layer, every optimizer, on top of that backend. That is precisely why it can switch between different backends.
Having gone that far, Keras's flexibility is beyond doubt, but it is hard to appreciate from the documentation and the ordinary examples; often you have to read the source before you feel that the way Keras is written leaves nothing to criticize. To me, implementing a complex model in Keras is both a challenge and a piece of artistic creation, and when you succeed, you get to savour the artwork you have produced.
To cite this post, please refer to:
Su Jianlin. (Aug. 06, 2018). "Making Keras a little cooler: elegant layers and fancy callbacks" [Blog post]. Retrieved from https://kexue.fm/archives/5765
|
Bounty: 200
Are there any tricks I can employ to get IDEs to offer code completion for dynamically generated class attributes? For instance
class A:
def __init__(self):
setattr(self, "a", 5)
This code will set the attribute a on instances of A to the value 5. But IDEs do not know about a and therefore you do not get code completion for it. I’ve read that the __dir__ method can be hooked, but the suggestion made in that answer has not worked for me. Does anybody have any ideas?
|
The Django version used in this walkthrough is the current latest release, v2.2.
When django.contrib.auth is included in INSTALLED_APPS in the Django settings, a simple permission system is enabled by default, providing a way to assign permissions to users or groups.
Why call it simple? Mainly because:
1. The default permission system is table-based: the smallest unit of control is a table (model).
In other words, if there is a Blog table, we can grant a user or group the delete permission on the Blog table, and they can then delete every Blog; it cannot restrict users to deleting only the blogs they created themselves.
2. Each model gets only four default permissions, namely add_, change_, delete_ and view_. These permissions are stored in the Permission table, whose rows look like this:
The default permissions are created through Django signals, using the post_migrate signal: every time migrate runs, default permissions are created for any new models. For an introduction to Django signals, see the earlier article on using signals to watch model field changes and send notifications.
The default permission names and descriptions are in English, and there are only four of them. If you don't want the defaults and would rather define your own, you can do this:
class Blog(models.Model):
title = models.CharField(max_length=256, verbose_name='标题')
content = models.TextField(blank=True, null=True, verbose_name='内容')
class Meta:
default_permissions = ()
permissions = (
("change_blog", "修改博客"),
("delete_blog", "查看博客"),
("publish_blog", "发布博客"),
)
default_permissions: clears the default permissions
permissions: defines the permissions; it is a nested sequence where the first field of each entry is the codename and the second is the name
Note: if you use Django's built-in admin, it is recommended to keep the four default permissions and add new ones on top
If you use the built-in admin, after migrate you will see the newly added permissions in the admin's user and group pages
Of course you can also add or modify permissions from code
ops = User.objects.get(id=2)
ops.user_permissions.add(25, 26)
ops.user_permissions.set([26, 27])
ops.user_permissions.remove(26, 27)
ops.user_permissions.clear()
coffee = Group.objects.get(id=1)
coffee.permissions.add(25)
coffee.permissions.set([26,27])
coffee.permissions.remove(25)
coffee.permissions.clear()
Here add adds, set replaces, remove removes and clear empties the permissions. The difference between add and set is that add grants new permissions on top of the existing ones, while set clears the existing permissions and replaces them with the new set. The trailing arguments 25, 26, 27 can be Permission IDs or Permission objects, so this also works:
p = Permission.objects.get(id=25)
coffee.permissions.add(p)
When you grant permissions to a group, every user in the group automatically gains those permissions. For example, if the user ops-coffee belongs to the group SRE, and the SRE group has the change permission on the Blog table, then even though ops-coffee was never granted any permission individually, they still have the change permission on the Blog table.
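A small hedged sketch of that behaviour (the names are taken from the example above; this snippet is an addition, not from the original post):

from django.contrib.auth.models import User, Group, Permission

user = User.objects.get(username='ops-coffee')
sre = Group.objects.get(name='SRE')
# assumes 'change_blog' is unique; filter by content_type as well in real code
sre.permissions.add(Permission.objects.get(codename='change_blog'))
user.groups.add(sre)

# Permissions are cached on the user object, so re-fetch it before checking
user = User.objects.get(username='ops-coffee')
print(user.has_perm('blog.change_blog'))  # True, inherited from the SRE group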
get_all_permissions() lists all of a user's permissions:
>>> User.objects.get(username='ops-coffee').get_all_permissions()
{'blog.publish_blog', 'blog.delete_blog', 'auth.add_group', 'blog.change_blog'}
get_group_permissions() lists the permissions the user has through their groups:
>>> User.objects.get(username='ops-coffee').get_group_permissions()
{'blog.publish_blog', 'blog.change_blog', 'blog.delete_blog'}
A user object can check whether it has a given permission with the has_perm method:
>>> User.objects.get(username='ops-coffee').has_perm('blog.change_blog')
True
>>> User.objects.get(username='ops-coffee').has_perm('blog.delete_blog')
True
The argument to has_perm is made up of two parts, <app label>.<permission codename>; for example, blog.delete_blog refers to the delete_blog permission of the app named blog.
You can check permissions directly in a view with an if statement, for example:
def ops_coffee_view(request):
    if not request.user.has_perm('blog.change_blog'):
        return HttpResponse('403 Forbidden')
For convenience, Django also provides a permission_required() decorator for quickly checking whether a user holds a specific permission. Its usage is:
@permission_required(perm, login_url=None, raise_exception=False)
The three arguments mean:
perm: required; the permission name, in the same format as for has_perm
login_url: optional; the login URL, so that a user without the permission is automatically redirected to the login page; you can set that URL here
raise_exception: optional; when True, a user without the permission is not redirected to the login page; instead a PermissionDenied error is raised and 403 Forbidden is returned
The following example checks whether the user has the change_blog permission of the blog app and returns a 403 error if not:
@permission_required('blog.change_blog', raise_exception=True)
def ops_coffee_view(request):
...
The permissions of the currently logged-in user are stored in the template variable {{ perms }}. In a template you can wrap content in an if check so that it is only shown to users with the corresponding permission; for example, to render only the sidebar menu entries the user is allowed to visit:
{% if perms.cmdb.view_project %}
<li><a href="{% url 'project-list-url' %}"></i> 项目列表</a></li>
{% endif %}
{% if perms.cmdb.view_service %}
<li><a href="{% url 'service-list-url' %}"></i> 服务列表</a></li>
{% endif %}
{% if perms.cmdb.view_environment %}
<li><a href="{% url 'environment-list-url' %}"></i> 环境列表</a></li>
{% endif %}
That wraps up Django's default permission system. For small projects the defaults cover most needs; if you need finer-grained control, have a look at the django-guardian project mentioned earlier, or implement your own.
|
models.py:
class Person(models.Model):
name = models.CharField(max_length=200)
CATEGORY_CHOICES = (
('M', 'Male'),
('F', 'Female'),
)
gender = models.CharField(max_length=200, choices=CATEGORY_CHOICES)
to_be_listed = models.BooleanField(default=True)
description = models.CharField(max_length=20000, blank=True)
views.py:
def index(request):
    latest_person_list = Person.objects.filter(to_be_listed=True)
    return object_list(request, template_name='polls/schol.html',
                       queryset=latest_person_list, paginate_by=5)
On the template, when I call person.gender, I get 'M' or 'F' instead of 'Male' or 'Female'.
How to display the value ('Male' or 'Female') instead of the code ('M'/'F')?
It looks like you were on the right track - get_FOO_display() is most certainly what you want:
In templates, you don't include () in the name of a method. Do the following:
{{ person.get_gender_display }}
|
Making your own programming language with Python
Making your own programming language with Python
Why make your own language?
When you write your own programming language, you control the entire programmer experience.
This allows you to shape exactly how each aspect of your language works and how a developer interacts with it.
This allows you to make a language with things you like from other languages and none of the stuff you don't.
In addition, learning about programming language internals can help you better understand the internals of programming languages you use every day, which can make you a better programmer.
How programming languages work
Every programming language is different in the way it runs, but many consist of a couple fundamental steps: lexing and parsing.
Introduction to Lexing
Lexing is short for LEXical analysis.
The lex step is where the language takes the raw code you've written and converts it into an easily parsable structure.
This step interprets the syntax of your language and turns text into special symbols inside the language called tokens.
For example, let's say you have some code you want to parse. To keep it simple I'll use python-like syntax, but it could be anything. It doesn't even have to be text.
# this is a comment
a = (1 + 1)
A lexer to parse this code might do the following:
Discard all comments
Produce a token that represents a variable name
Produce left and right parenthesis tokens
Convert literals like numbers or strings to tokens
Produce tokens for math operations like + - * / (and maybe bitwise/logical operators as well)
The lexer will take the raw code and interpret it into a list of tokens.
The lexer also ensures that two pieces of code that look different, like 1 + 1 and 1+1, are still parsed the same way.
For the code above, it might generate tokens like this:
NAME(a) EQUALS LPAREN NUMBER(1) PLUS NUMBER(1) RPAREN
Tokens can be in many forms, but the main idea here is that they are a standard and easy to parse way of representing the code.
Introduction to Parsing
The parser is the next step in the running of your language.
Now that the lexer has turned the text into consistent tokens, the parser simplifies and executes them.
Parser rules recognize a sequence of tokens and do something about them.
Let's look at a simple example for a parser with the same tokens as above.
A simple parser could just say:
If I see the GREET token and then a NAME token, print Hello, and then the name.
A more complicated parser aiming to parse the code above might have these rules, which we will explore later:
Try to classify as much code as possible as an expression. By "as much code as possible" I mean the parser will first try to consider a full mathematical operation as an expression, and then if that fails convert a single variable or number to an expression. This ensure that as much code as possible will be matched as an expression. The "expression" concept allows us to catch many patterns of tokens with one piece of code. We will use the expression in the next step.
Now that we have a concept of an expression, we can tell the parser that if it sees the tokens NAME EQUALS and then an expression, that means a variable is being assigned.
Using PLY to write your language
What is PLY?
Now that we know the basics of lexing and parsing, lets start writing some python code to do it.
PLY stands for Python Lex Yacc.
It is a library you can use to make your own programming language with python.
Lex is a well known library for writing lexers.
Yacc stands for "Yet Another Compiler Compiler" which means it compiles new languages, which are compilers themself.
This tutorial is a short example, but the PLY documentation is an amazing resource with tons of examples. I would highly recommend that you check it out if you are using PLY.
For this example, we are going to be building a simple calculator with variables. If you want to see the fully completed example, you can fork this repl: [TODO!!]
Lexing with PLY lex
Lexer tokens
Lets start our example! Fire up a new python repl and follow along with the code samples.
To start off, we need to import PLY:
from ply import lex, yacc
Now let's define our first token. PLY requires you to have a tokens list which contains every token the lexer can produce. Let's define our first token, PLUS for the plus sign:
tokens = [
'PLUS',
]
t_PLUS = r'\+'
A string that looks like r'' is special in python. The r prefix means "raw", which keeps backslashes in the string. For example, to define the string \+ in python, you could either do '\\+' or r'\+'. We are going to be using a lot of backslashes, so raw strings make things a lot easier.
But what does \+ mean?
Well in the lexer, tokens are mainly parsed using regexes.
A regex is like a special programming language specifically for matching patterns in text.
A great resource for regexes is regex101.com where you can test your regexes with syntax highlighting and see explanations of each part.
I'm going to explain the regexes included in this tutorial, but if you want to learn more you can play around with regex101 or read one of the many good regex tutorials on the internet.
The regex \+ means "match a single character +".
We have to put a backslash before it because + normally has a special meaning in regex, so we have to "escape" it to show we want to match a + literally.
We are also required to define a function that runs when the lexer encounters an error:
def t_error(t):
print(f"Illegal character {t.value[0]!r}")
t.lexer.skip(1)
This function just prints out a warning when it hits a character it doesn't recognize and then skips it (the !r means repr so it will print out quotes around the character).
You can change this to be whatever you want in your language though.
Optionally, you can define a newline token which isn't produced in the output of the lexer, but keeps track of each line.
def t_newline(t):
r'\n+'
t.lexer.lineno += len(t.value)
Since this token is a function, we can define the regex in docstring of the function instead.
The function takes a parameter t, which is a special object representing the match that the lexer found. We can access the lexer using the t.lexer attribute.
This function matches at least one newline character and then increases the line number by the number of newlines it sees. This allows the lexer to know what line number it's on at all times using the lexer.lineno variable.
Now we can use the line number in our error function:
def t_error(t):
print(f"Illegal character {t.value[0]!r} on line {t.lexer.lineno}")
t.lexer.skip(1)
Let's test out the lexer!
This is just some temporary code, you don't have to know what this code does, because once we implement a parser, the parser will run the lexer for you.
lexer = lex.lex()
lexer.input('+')
for token in lexer:
print(token)
Play around with the value passed to lexer.input.
You should notice that any character other than a plus sign makes the error message print out, but doesn't crash the program.
In your language, you can make it gracefully ignore lex errors like this or make it stop running by editing the t_error function.
If you add more lines to the input string, the line number in the error message should change.
More complicated tokens
Let's delete the test token and add some more complicated tokens.
Replace your tokens list and the t_PLUS line with the following code:
reserved_tokens = {
'greet': 'GREET'
}
tokens = list(reserved_tokens.values()) + [
    'SPACE',
    'NAME',  # t_ID can emit NAME, so it must be declared as a token too
]
t_SPACE = r'[ ]'
def t_ID(t):
r'[a-zA-Z_][a-zA-Z0-9_]*'
if t.value in reserved_tokens:
t.type = reserved_tokens[t.value]
else:
t.type = 'NAME'
return t
Let's explore the regex we have in the t_ID function.
This regex is more complicated that the simple ones we've used before.
First, we have [a-zA-Z_]. This is a character class in regex. It means, match any lowercase letter, uppercase letter, or underscore.
Next we have [a-zA-Z0-9_]. This is the same as above except numbers are also included.
Finally, we have *. This means "repeat the previous group or class zero to unlimited times".
Why do we structure the regex like this?
Having two separate classes makes sure that the first one must match for it to be a valid variable.
If we exclude numbers from the first class, it not only doesn't match just regular numbers, but makes sure you can't start a variable with a number.
You can still have numbers in the variable name, because they are matched by the second class of the regex.
In the code, we first have a dictionary of reserved names.
This is a mapping of patterns to the token type that they should be.
The only one we have says that greet should be mapped to the GREET token.
The code that sets up the tokens list takes all of the possible reserved token values, which in this example is just ['GREET'], and adds on ['SPACE', 'NAME'], giving us ['GREET', 'SPACE', 'NAME'] automatically!
But why do we have to do this? Couldn't we just use something like the following code?
# Don't use this code! It doesn't work!
t_GREET = r'greet'
t_SPACE = r'[ ]'
t_NAME = r'[a-zA-Z_][a-zA-Z0-9_]*'
Actually, if we used that code, greet would never be matched! The lexer would match it with the NAME token. In order to avoid this, we define a new type of token which is a function. This function has the regex as its docstring and is passed a t parameter. This parameter has a value attribute which is the pattern matched.
The code inside this function simply checks if this value is one of the special reserved names we defined before. If it is, we set the special type attribute of the t parameter. This type controls the type of token which is produced from the pattern. When it sees the name greet, it will see greet is in the reserved names dictionary and produce a token type of GREET because that is the corresponding value in the dictionary. Otherwise, it will produce a NAME token because this is a regular variable.
This allows you to add more reserved terms easily later, its as simple as adding a value to the dictionary.
If needed, you could also make the keys of the reserved names dictionary regexes and then match each regex against t.value in the function.
If you want to change these rules for your language, feel free!
Parsing with PLY yacc
Fair warning: Yacc can sometimes be hard to use and debug, even if you know python well.
Keep in mind, you don't have to use both lex and yacc, if you want you can just use lex and then write your own code to parse the tokens.
With that said lets get started.
Yacc basics
Before we get started, delete the lexer testing code (everything from lexer.input onward).
When we run the parser, the lexer is automatically run.
Let's add our first parser rule!
def p_hello(t):
'statement : GREET SPACE NAME'
print(list(t))
print(f"Hello, {t[3]}")
Let's break this down.
Again, we have information on the rule in the docstring.
This information is called a BNF Grammar. A statement in BNF Grammar consists of a grammar rule known as a non-terminal and terminals.
In the example above, statement is the non-terminal and GREET SPACE NAME are terminals.
The left-hand side describes what is produced by the rule, and the right-hand side describes what matches the rule.
The right hand side can also have non-terminals in it, just be careful to avoid infinite loops.
Basically, the yacc parser works by pushing tokens onto a stack, and looking at the current stack and the next token and seeing if they match any rules that it can use to simplify them. Here is a more in-depth explanation and example.
Before the above example can run, we still have to add some more code.
Just like for the lexer, the error handler is required:
def p_error(t):
if t is None: # lexer error, already handled
return
print(f"Syntax Error: {t.value!r}")
Now let's create and run the parser:
parser = yacc.yacc()
parser.parse('greet replit')
If you run this code you should see:
[None, 'greet', ' ', 'replit']
Hello, replit
The first line is the list version of the object passed to the parser function.
The first value is the statement that will be produced from the function, so it is None.
Next, we have the values of the tokens we specified in the rule.
This is where the t[3] part comes from. This is the third item in the array, which is the NAME token, so our parser prints out Hello, replit!
Note: Creating the parser tables is a relatively expensive operation, so the parser creates a file called
parsetab.pywhich it can load the parse tables from if they haven't changed.
You can change this filename by passing a kwarg into theyaccinitialization, likeparser = yacc.yacc(tabmodule='fooparsetab')
More complicated parsing: Calculator
This example is different from our running example, so I will just show a full code example and explain it.
from ply import lex, yacc
tokens = (
'NUMBER',
'PLUS', 'MINUS', 'TIMES', 'DIVIDE',
'LPAREN', 'RPAREN',
)
t_PLUS = r'\+'
t_MINUS = r'-'
t_TIMES = r'\*'
t_DIVIDE = r'/'
t_LPAREN = r'\('
t_RPAREN = r'\)'
def t_NUMBER(t):
r'\d+'
try:
t.value = int(t.value)
except ValueError:
print(f"Integer value too large: {t.value}")
t.value = 0
return t
def t_newline(t):
r'\n+'
t.lexer.lineno += len(t.value)
def t_error(t):
print(f"Illegal character {t.value[0]!r} on line {t.lexer.lineno}")
t.lexer.skip(1)
t_ignore = ' \t'
lexer = lex.lex()
# Parsing
def p_expression_binop(t):
'''expression : expression PLUS expression
| expression MINUS expression
| expression TIMES expression
| expression DIVIDE expression'''
if t[2] == '+' : t[0] = t[1] + t[3]
elif t[2] == '-': t[0] = t[1] - t[3]
elif t[2] == '*': t[0] = t[1] * t[3]
elif t[2] == '/': t[0] = t[1] / t[3]
def p_expression_group(t):
'expression : LPAREN expression RPAREN'
t[0] = t[2]
def p_expression_number(t):
'expression : NUMBER'
t[0] = t[1]
def p_error(t):
if t is None: # lexer error
return
print(f"Syntax Error: {t.value!r}")
parser = yacc.yacc()
if __name__ == "__main__":
while True:
inp = input("> ")
print(parser.parse(inp))
First we start off with the tokens: numbers, mathematical operations, and parenthesis.
You might notice that I didn't use the reserved_tokens trick, but you can implement it if you want.
Next we have a simple number token which matches 0-9 with \d+ and then converts its value from a string to an integer.
The next code we haven't used before is t_ignore.
This variable is a string of all characters the lexer should ignore; here it is ' \t', meaning spaces and tabs.
When the lexer sees these, it will just skip them. This allows users to add spaces without it affecting the lexer.
Now we have 3 parser directives.
The first is a large one, producing an expression from 4 possible input values, one for each math operation.
Each input has an expression on either side of the math operator.
Inside this directive, we have some (pretty ugly) code that performs the correct operation based on the operation token given.
If you want to make this prettier, consider a dictionary using the python stdlib operator module.
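For example, a sketch of that operator-module approach (my addition, not part of the tutorial's final code):

import operator

BINOPS = {'+': operator.add, '-': operator.sub,
          '*': operator.mul, '/': operator.truediv}

def p_expression_binop(t):
    '''expression : expression PLUS expression
                  | expression MINUS expression
                  | expression TIMES expression
                  | expression DIVIDE expression'''
    # look up the matched operator token and apply it to the two sub-expressions
    t[0] = BINOPS[t[2]](t[1], t[3])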
Next, we define an expression with parenthesis around it as being the same as the expression inside.
This makes parenthesis value be substituted in for them, making them evaluate inside first.
With very little code we created a very complicated rule that can deal with nested parenthesis correctly.
Finally, we define a number as being able to be an expression, which allows a number to be used as one of the expressions in rule 1.
For a challenge, try adding variables into this calculator!
You should be able to set variables by using syntax like varname = any_expression and you should be able to use variables in expressions.
If you're stuck, see one solution from the PLY docs.
Thats it!
Thanks for reading! If you have questions, feel free to ask on the Replit discord's #help-and-reviews channel, or just the comments.
Have fun!
|
*********************
For the power function above, if most of the time you only need to compute squares, you can use a default parameter and modify power as follows:
def power(x, y=2):
return x**y
Now when power is called without the y argument, it computes the square of x by default:
>>>power(3)
9
>>>power(3, 3)
27
Note that default parameters must come after required parameters; default parameters simplify calling the function.
In Python, a function is defined with the def statement, followed by the function name, parentheses, the parameters inside the parentheses (if any) and a colon; the function body is then written in an indented block (four spaces), and a value is returned with a return statement if needed. Moving most of the heavy lifting of a program into functions keeps the main program simple.
A simple function looks like this:
def test(name):
    """Print a greeting."""
    print('hello ' + name)
    return name
The text wrapped in triple quotes is a docstring; Python uses it to generate the function's documentation.
You can also define an empty function:
def test():
pass
(5) End the function with return [value], which hands a value back to the caller. If the function has no explicit return value, return gives back None.
def power(x, y):
result = 1
for i in range(y):
result *= x
return result
print(power(2, 3))
def power(x, y=2):
return x**y
对于函数调用中的关键字实参,也应遵循这种约定:
>>>power(3, y=4)
5.2 返回值的作用
5. 请问调用以下这个函数会打印什么内容?
调用函数时,Python要将函数调用中的每个实参关联到函数定义中对应的形参,这种关联方式称为位置参数。
def power(x, y):
"""计算x的y次方"""
return x**y
x、y就是位置参数,调用power函数时必须传入对应的x、y:
>>>power(2, 4)
16
第12行,调用函数test_func(),提供三个实参:1、2、3。
*******************************
6.1.1 导入整个模块
因为当Python执行到return语句的时候,Python认为函数到此结束,需要返回了(尽管没有任何返回值)。
关键字参数允许你传入任意个key-value,这些key-value在函数内部自动组装为一个dict。
如果使用关键字参数,不用考虑函数调用中的实参顺序问题:
def person(first_name, last_name):
return first_name ' ' last_name
>>>person(first_name='zhang', last_name='san')
zhang san
>>>person(last_name='san', first_name='zhang')
zhang san
另外,要传递多个key-value如何做呢?
def person(**person_info):
print(person_info)
# 传入多个key-value
>>>person(name='tom', age=10)
{'name': 'tom', 'age': 10}
# 传入dict
>>>info = {'name': 'tom', 'age': 10}
>>>person(**info)
{'name': 'tom', 'age': 10}
可以发现,我们传入的关键字参数名字没有任何限制,当然可以通过添加条件判断来实现,但是麻烦,命名关键字参数很好的解决了这个问题:
def person(name, *, age, job):
print(name, age, job)
注意形参列表的星号,星号后的形参就是命名关键字参数,如果传入的关键字参数的名字不在其中,就会报错:
>>>person('tom', age=20, job='it')
tom 20 it
>>>person('tom', age=20, job='it', sex='boy')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: person() got an unexpected keyword argument 'sex'
第1行,从模块max_min_num中导入求最大值和求最小值的函数,并分别给它们取一个别名。
四、课时17课后习题及答案
函数并非总是直接显示输出,它可以处理一些数据,并返回一个或一组值。
Python的返回值比较特殊,除了返回单个基本类型数据、列表、字典、set外,还可以返回多个值,例如:
def test():
return 1, 2, 3, 4
>>>test()
(1, 2, 3, 4)
可以看到,Python的函数返回多值其实就是返回一个tuple。
二、函数的参数
顾名思义就是传入的参数个数是可变的,这点和关键字参数类似。
def get_sum(*numbers):
return sum(numbers)
# 传入多个实参
>>>get_sum(1, 2, 3, 4)
10
# 传入list
>>>l = [1, 2, 3, 4]
>>>get_sum(*l)
10
# 传入tuple
>>>t = (1, 2, 3, 4])
>>>get_sum(*t)
10
第2行,用文档字符串来注释说明该函数的功能。
0. 编写一个函数power()模拟内建函数pow(),即power(x,y)为计算并返回x的y次幂的值。
The max is 20 .
def myFristFunction():
print("DC love ZWW")
print("1314520")
0. 你有听说过DRY吗?
>>> def add(name1,name2):
print(name1 " love " name2)
>>> add("DC","ZWW")
DC love ZWW
2. 函数可以有多个参数吗?
第1行,用def 关键字定义一个函数greet_user(),并以冒号结尾。
测试题:
在Python中,函数是逻辑结构化和过程化的一种方法;是带名字的、组织好的、可重复使用的代码块,用于完成具体的任务。Python用def关键字来定义函数,然后用return关键字返回值,其语法格式如下:
4. 请问这个函数有多少个参数?
四、课时17课后习题及答案
运行结果:
2.编写一个将十进制转换为二进制的函数,要求采用“除2取余”(脑补链接)的方式,结果与调用bin()一样返回字符串形式。
例如,调用模块max_min_num中的求最大值和最小值的函数。
1. 编写一个函数,利用欧几里得算法(脑补链接)求最大公约数,例如gcd(x,y)返回值为参数x和参数y的最大公约数。
第6行,调用该函数greet_user()。由于该函数没有参数,所以直接用函数名加括号()调用即可。
******************
第11行,调用函数test_func(),并按位置指定其参数值,即x=4,y=5,z=6。
一、创建和调用函数
第3行,创建一个空的列表。
import 模块名
def MyFun():
# 我是函数体
# 我也是函数体
# 我们都属于函数MyFun()
# 噢,我不属于MyFun()函数的了
代码:
函数的调用和运行机制:当函数myFristFunction()发生调用操作的时候,Python会自动往上找到defmyFristFunction()的定义过程,倘若没找到就会报错。然后依此执行该函数所包含的代码块部分(也就是冒号后缩进的那部分内容)。只需要一条语句,就可以实现函数内的所用功能。
函数名称([参数1],[参数2],...,[参数N])
>>> def MyFun(x, y):
return x[0] * x[1] - y[0] * y[1]
>>> MyFun((3, 4), (1, 2))
10
from 模块名 import [函数名1], [函数名2],..., [函数名N]
解:
代码:
一、创建和调用函数
答:
运行结果:
代码:
假如,我想把刚才的内容打印3次,我只需要调用3次函数即可:
例如,我们创建将1.6.3中的代码修改一下,值保持函数代码部分,作为一个球任意两数的最大者的模块max_num,然后在一个调用程序test_max.py使用该模块。
二、函数的参数
第4行,用关键字return 结束函数。
我们分析下,函数的参数需要的是变量,而这里你试图用“元祖”的形式来传递是不可行的。 我想你如果这么写,你应该是要表达这么个意思:
例如,创建一个函数具有三个形参x、y、z,其中z的默认值为0,然后调用该函数,并打印的值。
*******************************
第14行,调用函数test_func(),提供四个实参:1、2、3、4。
有时候,我们无法预先知道函数需要接受多少个关键字实参,因此,我们可以使用‘**kwargs’定义形参, Python函数会从调用语句中收集任意数量的关键字实参进行处理。
>>> hello()
Hello World!
使用多个参数的时候,只需要用逗号隔开:
在Python中,函数的的作用主要有三点:
例如,将求任意两个数的最大值和最小值的函数放到一个模块max_min_num的模块中,然后调用其中求最小值的函数。
0) 可以降低代码量(调用函数只需要一行,而拷贝黏贴需要N倍代码)1) 可以降低维护成本(函数只需修改def部分内容,而拷贝黏贴则需要每一处出现的地方都作修改)2) 使序更容易阅读(没有人会希望看到一个程序重复一万行“I love FishC.com”)
因此,函数调用存在两种情况,一种是无参函数调用;一种是有参函数调用。
那究竟可以多少个参数呢?理论上想要多少个就有多少个。
三、函数的返回值
从以上运行结果可知,跟1.4.1.1中的一致。
def MyFun((x, y), (a, b)):
return x * y - a * b
第6行,调用函数test_func(),不提供任何实参值。
答:DRY是程序员们公认的指导原则:Don'tRepeat Yourself.
快快武装你的思维吧,拿起函数,不要再去重复拷贝一段代码了
第3行,打印一条登录时的欢迎问候语。
为了使得程序得代码变得简单,就需要把程序分解成较小得组成部分。有三种方法可以实现:函数、对象、模块。
(3)可与其他程序员共享这些文件而不是整个程序。
>>> def hello():
print('Hello World!')
return
print('Welcome To FishC.com!')
1 ---第一次调用---2 13 34 35 ---第二次调用---6 47 58 6
>>> for each in range(3):
myFristFunction()
DC love ZWW
1314520
DC love ZWW
1314520
DC love ZWW
1314520
解:
在Python中,参数也分为实参和形参。实参就是调用函数时,在括号中指定的具有实际值的参数;形参就是在定义函数时,在括号中指定的变量,无实际值。 其中,实参包括:位置实参、关键字实参、默认值等。
三、函数的返回值
1 # 有参函数调
2 def greet_user(username):
3 """用户登录时,显示简单的问候语"""
4 print("Welcome to ",username,"!")
5 return
6
7 #有参函数调用
8 greet_user("Yun")
我们的函数要有返回值,只需要在函数中使用关键字return,后面就跟着要返回的值。
运行结果:
注意:在函数的后面要加上一对小括号哦。这小括号是必不可少的。接下来是函数的调用:
说明:
解:
运行结果:
3. 创建函数使用什么关键字,要注意什么?
因此,可简单的说明,函数即由函数名称及函数体组成。其中函数体包括:文档字符串、代码块、返回值。
def gcd(x, y):
while y:
t = x % y
x = y
y = t
return x
print(gcd(4, 6))
(3)可扩展性。当我们需要让能够函数帮助我们完成更多的任务,我们只需在函数中编写实现即可。若参数有变化,则只需修改调用的地方。
def Dec2Bin(dec):
    temp = []
    result = ''
    while dec:
        quo = dec % 2
        dec = dec // 2
        temp.append(quo)
    while temp:
        result += str(temp.pop())
    return result
print(Dec2Bin(62))
运行结果:
******************
(4)代码块就是实现函数功能的具体的代码。
答:会打印:
5.4 返回一个列表
>>> myFristFunction()
DC love ZWW
1314520
由于函数定义中可能包含多个形参,因此有参函数的调用也可能包含多个实参。 调用函数时,给函数传递实参的方式有:位置实参、关键字实参、默认、还可能是列表和字典等等。
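As a hedged illustration of passing a list or a dict at the call site (the names are made up for this sketch), * unpacks a list or tuple into positional arguments and ** unpacks a dict into keyword arguments:
def test_args(x, y, z):
    print(x, y, z)

args_list = [1, 2, 3]
kwargs_dict = {'x': 4, 'y': 5, 'z': 6}
test_args(*args_list)     # 列表解包为位置实参,输出:1 2 3
test_args(**kwargs_dict)  # 字典解包为关键字实参,输出:4 5 6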
使用默认值的好处:
***********************
例如,定义一个欢迎用户登录时的问候语的函数,并调用它。
*********************
5.1 返回值的定义
目录:
(2)可清楚指出函数的典型用法
>>> def mySecondFunction(name):
    print(name + "爱ZWW")
>>> mySecondFunction("DC")
DC爱ZWW
>>> mySecondFunction("dc")
dc爱ZWW
使用函数:
函数的调用很简单,即用函数名称加圆括号(),若有参数,则将其放在括号中;若有多个参数,则用逗号分开。具体语法格式如下所示:
>>> def add(name1,name2):
    print(name1 + " love " + name2)
    return "LOVE"
>>> add("DC","ZWW")
DC love ZWW
'LOVE'
>>> def add(name1,name2):
    print(name1 + " love " + name2)
    return name1
>>> add("DC","ZWW")
DC love ZWW
'DC'
(2)当混合使用关键字实参和位置实参时,位置实参只能位于关键字实参的前面。
答:使用“def”关键字,要注意函数名后边要加上小括号“()”,然后小括号后边是冒号“:”,然后缩进部分均属于函数体的内容,例如:
1. 函数的定义
from 模块名 import *
动动手:
"""文档字符串"""
代码:
括号里放的就是函数的参数。参数就是使得函数可以实现个性化:
第16行,调用函数test_func(),提供五个实参:1、2、3、4、5。
此前接触的BIF就是Python帮我们封装好的函数。在Python中创建一个函数用def关键字。
模块中的函数:
***********************
代码:
答:可以的,理论上你想要有多少个就可以有多少个,只不过如果函数的参数过多,在调用的时候出错的机率就会大大提高,因而写这个函数的程序员也会被相应的问候祖宗,所以,尽量精简吧,在Python的世界里,精简才是王道!
def 函数名称([参数1],[参数2],...,[参数N]):
答:如果你回答两个,那么恭喜你错啦,答案是0,因为类似于这样的写法是错误的!
从以上运行结果可知,与1.4.1.1中的一致。
1.都是重复一段代码,为什么我要使用函数(而不使用简单的拷贝黏贴)呢?
1 The min is 18 .
2 The max is 26 .
1 ---第一次调用---
2 {}
3 ---第二次调用---
4 {'x': 1}
5 ---第三次调用---
6 {'x': 1, 'y': 2}
7 ---第四次调用---
8 {'x': 1, 'y': 2, 'z': 3}
9 ---第五次调用---
10 {'x': 1, 'y': 2, 'z': 3, 'x1': 4}
11 ---第六次调用---
12 {'x': 1, 'y': 2, 'z': 3, 'x1': 4, 'y1': 5}
说明:
第2行,打印该形参"**kwargs”的值。
(2)可在众多不同的程序中重用函数。
说明:
1 from max_min_num import test_min as t_min,test_max as t_max
2
3 min = t_min(20,18)
4 print("The min is",min,".")
5
6 max = t_max(20,26)
7 print("The max is",max,".")
1 def test_func(*args):
2     print(args)
3     return
4
5 print("---第一次调用---")
6 test_func()
7 print("---第二次调用---")
8 test_func(1)
9 print("---第三次调用---")
10 test_func(1,2)
11 print("---第四次调用---")
12 test_func(1,2,3)
13 print("---第五次调用---")
14 test_func(1,2,3,4)
15 print("---第六次调用---")
16 test_func(1,2,3,4,5)
(1)调用有默认值的函数时,如果没有指定实参,那么形参将使用自身的默认值,反之,则使用指定的实参。
代码:
例如,定义一个有三个形参的函数,并调用它。
(2)函数的内容以冒号起始,并且换行后需缩进。
4.2 有参函数的调用
运行结果:
第2行,用def 关键字定义一个带有形参username的函数greet_user(),并以冒号结尾。
(4)使用函数让程序更容易阅读。
Welcome to login!
4.2.3 默认值
1 def test_func(list_nums):
2 """接收一个列表,返回奇数组成的列表"""
3 list_nums_new = []
4 list_nums_bak = list_nums[:]
5 while list_nums_bak:
6 list_num = list_nums_bak.pop()
7 if list_num % 2 == 0 :
8 pass
9 else:
10 list_nums_new.append(list_num)
11 return list_nums_new
12
13 list = test_func([0,1,2,3,4,5,6,7,8,9])
14 print(list)
关键字实参是传递给函数的名称-值对,即每个实参都由变量名和值组成。由于可以直接将实参中的名称和值关联起来,因此向函数传递实参时就不会混淆,调用函数时不仅不用考虑实参的顺序,还能清楚地指出函数调用中每个值的用途。但是,使用关键字实参时,必须准确地指定函数定义中的形参名。
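A minimal sketch of the point about ordering (greet_user here mirrors the tutorial's earlier example): with keyword arguments the call order does not matter.
def greet_user(username, greeting):
    print(greeting, username)

# 两次调用的关键字实参顺序不同,结果完全相同
greet_user(username="Yun", greeting="Welcome to")
greet_user(greeting="Welcome to", username="Yun")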
代码:
(1)函数名称应具有描述性,且只使用小写字母和下划线来命名。
(2)每个函数都应包含文档字符串,即简要地阐述该函数的功能的注释,该注释应紧跟在函数定义后面。
(3)给形参指定默认值时,等号两边不要有空格。
(4)对于函数调用中的关键字实参,也应遵循等号两边不要有空格的约定。
(5)PEP8建议每行代码的长度不要超过79个字符。
(6)如果程序或模块包含多个函数,可使用两个空行将相邻的函数分开。
(7)所有的 import语句都应放在文件开头,唯一例外的情形是,在文件开头使用了注释来描述整个程序。
1 ---第一次调用---
2 1
3 2
4 3
5 ---第二次调用---
6 4
7 5
8 6
1 def test_func(dict_nums):
2 """接收一个列表,返回值为奇数组成的字典"""
3 dict_nums_new = {}
4 for key,vlaue in dict_nums.items():
5 if vlaue % 2 == 0 :
6 pass
7 else:
8 dict_nums_new[key] = vlaue
9 return dict_nums_new
10
11 dict = test_func({'a':0,'b':1,'c':2,'d':3,'e':4,'f':5})
12 print(dict)
4.2.2 关键字实参
说明:
代码:
(2)有多个确定个数的位置实参
在Python中,模块就是扩展名为.py的文件,它包含要导入到程序中的代码。
(1)当我们用关键字参数调用函数时,必须每个实参都需指定其关联的形参名。
第2行,将接收到的参数以元组的形式输出。
1 def greet_user():
2 """用户登录时,显示简单的问候语"""
3 print("Welcome to login!")
4 return
5
6 greet_user()
第1行,用关键字def定义了一个参数个数不确定的函数test_func(),其中“*args”表示形参个数不确定。
{'b': 1, 'd': 3, 'f': 5}
从以上结果可知,使用函数别名和使用函数本身是一样的效果。
第5行,使用while循环列表副本。
模块:
(1)可隐藏程序代码的细节。
4. 函数的调用
4.2.1 位置实参
代码:
1 from max_min_num import *
2
3 min = test_min(20,18)
4 print("The min is",min,".")
5
6 max = test_max(20,26)
7 print("The max is",max,".")
5.3 返回一个简单值
1 def test_func(x,y,z):
2 """接受三个参数值,并打印它们"""
3 print(x)
4 print(y)
5 print(z)
6 return
7
8 print("---第一次调用---")
9 test_func(1,2,3)
10 print("---第二次调用---")
11 test_func(4,5,6)
例如,创建一个函数接受一个列表,然后将奇数组成一个新的列表作为返回值。
第1行,用def关键字定义一个具有x,y,z三个形参的函数test_func()。
1 def test_max(x,y):
2 """判断数字大小,返回最大值"""
3 if x > y:
4 max_num = x
5 else:
6 max_num = y
7 return max_num
8
9 def test_min(x,y):
10 """判断数字大小,返回最大值"""
11 if x < y:
12 min_num = x
13 else:
14 min_num = y
15 return min_num
1 from max_min_num import test_min
2
3 min = test_min(20,18)
4 print("The min is",min,".")
第7行,调用函数greet_user()时,使用关键字实参来给函数传值。
6.1 模块的导入
运行结果:
from 模块名 import 函数名
第1行,使用import导入模块max_num。
[9, 7, 5, 3, 1]
6.1.3 导入所有的函数
第4~8行,使用for语句循环接受的字典,然后判断该值是否为奇数,如果不是,则跳过;反之,则增加到空字典中。
说明:
说明:
(3)有多个不确定个数的位置实参
那么,假如我们不指定实参z的值,那结果如何呢?
第4行,创建一个接受到的列表的副本。
说明:
第9行,调用函数test_func(),并按位置指定其参数值,即x=1,y=2,z=3。
运行结果:
1 ---第一次调用---
2 1
3 2
4 0
5 ---第二次调用---
6 1
7 2
8 0
9 ---第三次调用---
10 1
11 2
12 0
13 ---第四次调用---
14 1
15 2
16 3
17 ---第五次调用---
18 1
19 2
20 3
21 ---第六次调用---
22 1
23 2
24 3
1 def test_func(**kwargs):
2 print(kwargs)
3 return
4
5 print("---第一次调用---")
6 test_func()
7 print("---第二次调用---")
8 test_func(x=1)
9 print("---第三次调用---")
10 test_func(x=1,y=2)
11 print("---第四次调用---")
12 test_func(x=1,y=2,z=3)
13 print("---第五次调用---")
14 test_func(x=1,y=2,z=3,x1=4)
15 print("---第六次调用---")
16 test_func(x=1,y=2,z=3,x1=4,y1=5)
6. 将函数存储在模块中
The maximum is 18 .
从以上运行结果可知,当我们不确定函数会接收多少个位置实参时,可使用“*args”作为形参,Python会把每次调用时传入的位置实参以元组的形式收集起来传递给函数。这些位置实参可以一个都没有,也可以有多个。
1 import max_num
2
3 max = max_num.test_max(20,18)
4 print("The max is",max,".")
4.1 无参函数的调用
(1)只有一个位置实参
例如,定义一个欢迎用户登录时的问候语的函数,根据不同用户打印一条相关的问候语,并调用它。
第7~10行,判断弹出的元素是否为偶数,如果是,则跳过,反之,则将其增加到创建的空列表list_nums_new中。
代码块
1 def greet_user():
2 """用户登录时,显示简单的问候语"""
3 print("Welcome to login!")
4 return
1 def test_func(x,y):
2 """判断数字大小,返回最大值"""
3 if x > y:
4 max_num = x
5 else:
6 max_num = y
7 return max_num
8
9 max_number = test_func(11,18)
10 print("The maximum is",max_number,".")
代码:
运行结果:
由于使用位置实参传参要求实参的顺序与形参的顺序相同,因此,在调用函数时,必须将函数调用中的每个实参都关联到函数定义中的一个形参。即实参的位置必须与形参的位置保持一致。
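A small sketch (the function test_div is hypothetical) showing why positional order matters:
def test_div(x, y):
    return x / y

print(test_div(10, 2))   # 5.0 —— x=10, y=2
print(test_div(2, 10))   # 0.2 —— 实参位置交换后,结果不同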
第3行,创建一个空字典dict_nums_new。
6.1.2 导入特定的函数
运行结果:
返回值是指函数返回的值,是函数重要的组成部分。由于函数的根本在于实现程序的部分功能,因此,很多时候我们需要将函数执行后的结果返回给程序再由程序作出进一步的操作。此时,可使用 return 语句将值返回到调用函数的代码行。
第8行,调用有参数的函数greet_user(),把实参"Yun" 的值传给形参username。
第6行,每次从列表副本中单出末尾的元素,将其赋值给变量list_num。
运行结果:
函数
(3)函数的第一行语句可以选择性地使用文档字符串,主要用于存放函数说明,描述该函数的功能。文档字符串用三引号括起来,Python使用它们来生成有关程序中函数的文档。
Welcome to Yun !
3. 参数
第8行,调用函数test_func(),提供一个实参:1。
代码:
说明:
例如,我们使用关键字实参来调用1.4.2.1中定义的函数。
导入特定的函数的方法为:
运行结果:
第1行,我们用关键字def定义函数test_func()时,由于不确认函数的形参个数,故用“**kwargs”作为形参。
运行结果:
第13行,调用函数test_func(),将其返回值赋值给变量list。
运行结果:
1 The min is 18 .
2 The max is 26 .
编写函数时,应遵循以下规范:
将函数存储在模块中,然后再将模块导入到主程序中。这样做的好处有:
The min is 18 .
7. 函数编写规范
从以上的运行结果可知:
第3行,使用求最小值的函数的别名调用其方法求最小值。
说明:
return [返回值]
第14行,打印返回的列表。
导入整个模块的方法为:
1 def test_func(x,y,z=0):
2 print(x)
3 print(y)
4 print(z)
5
6 print("---第一次调用---")
7 test_func(1,2)
8 print("---第二次调用---")
9 test_func(1,y=2)
10 print("---第三次调用---")
11 test_func(x=1,y=2)
12 print("---第四次调用---")
13 test_func(1,2,3)
14 print("---第五次调用---")
15 test_func(1,2,z=3)
16 print("---第六次调用---")
17 test_func(x=1,y=2,z=3)
运行结果:
Welcome to Yun !
如果需要从某个模块中导入多个函数,可使用逗号分开即可。具体方法如下所示:
第11行,用return语句返回奇数组成的新列表list_nums_new。
因此,给形参指定默认值后,可以在函数调用中省略相应的实参。但是在形参列表中,带默认值的形参只能放在没有默认值的形参后面,这样Python解释器才能正确地解读位置实参。
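A minimal sketch of that ordering rule (the names are illustrative): a parameter with a default value must come after the parameters without defaults, otherwise the definition itself fails.
def ok_func(x, y, z=0):      # 正确:带默认值的形参z放在最后
    print(x, y, z)

# def bad_func(x=0, y, z):   # 错误:定义时就会抛出SyntaxError
#     print(x, y, z)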
导入所有的函数的方法如下:
5.5 返回一个字典
例如,创建一个函数接受两个参数,然后返回最大者。
1 ---第一次调用---
2 ()
3 ---第二次调用---
4 (1,)
5 ---第三次调用---
6 (1, 2)
7 ---第四次调用---
8 (1, 2, 3)
9 ---第五次调用---
10 (1, 2, 3, 4)
11 ---第六次调用---
12 (1, 2, 3, 4, 5)
有时候,我们无法预先知道函数需要接受多少个位置实参,因此,我们可以使用 ‘*args’ 定义形参, Python函数会从调用语句中收集任意数量的位置实参进行处理。
例如,创建一个函数接受一个字典,然后将值的奇数的键-值对组成一个新字典返回。
例如,调用模块max_min_num中的所有函数。
运行结果:
从以上的运行结果可知:
代码:
1 File "F:/PyProject/s14/day3/test_function.py", line 10
2 test_func(x=1,y=3,3)
3 ^
4 SyntaxError: positional argument follows keyword argument
(1)函数代码块以 def 关键字开头,然后空格加函数名称,圆括号(),以冒号(:)结尾。其中,若函数有参数,则将其放在括号中,若有多个参数,则用逗号分开。
从以上的运行结果可知,当我们调用形参个数不确定,且用“**kwargs”作为形参的函数时,我们只能使用关键字实参传值,并且会将指定的关键字实参当作字典的形式输出。
因此,在导入模块或者模块中的函数时,如果模块名称或函数名称比较长,都可对其指定别名,在调用时,使用其别名即可。
从以上结果可知,从一个模块中分别导入的特定函数和导入所有函数的方法,其调用该函数的效果是不变的。
1 def test_max(x,y):
2 """判断数字大小,返回最大值"""
3 if x > y:
4 max_num = x
5 else:
6 max_num = y
7 return max_num
代码:
根据不同的需要,模块的导入方法也很多。模块的导入方法有:导入整个模块、导入特定的函数、导入模块中的所有函数。并且,我们可使用as来给导入的函数或者模块指定别名。
第6行,使用求最大值的函数的别名调用其方法求最大值。
(5)函数让代码更容易测试和调试。
1 def test_func(x,y,z):
2 """接受三个参数值,并打印它们"""
3 print(x)
4 print(y)
5 print(z)
6 return
7
8 print("---第一次调用---")
9 test_func(x=1,y=3,3)
10 print("---第二次调用---")
11 test_func(x=4,y=5,6)
说明:
1 def greet_user(username):
2 """用户登录时,显示简单的问候语"""
3 print("Welcome to ",username,"!")
4 return
5
6 #有参函数调用
7 greet_user(username="Yun")
返回值作用:能够将程序的大部分繁重工作移到函数中去完成,从而简化主程序。
(2)错误提示未指出第11行的错误,这是因为Python是解释型语言,前面的代码出错后,若没有对异常进行处理,程序就会停止,不再运行后续的代码。
代码:
例如,定义一个欢迎用户登录时的问候语的函数。
代码:
默认值就是指定的常量。当我们编写函数时,可以给每个形参指定默认值;在调用函数时,如果给形参提供了实参,则使用提供的实参,否则使用该形参指定的默认值。
5. 返回值
(3)有多个不确定个数的关键字参数
从以上的运行结果可知,指定的位置实参的值不同,其函数返回的值也不同。
(2)有多个确定个数的关键字参数
(1)代码重用。在程序中,可以调用已经有的函数,不用再重新写实现代码,故使得代码简洁,并能实现相同的功能。
2. 函数的作用
代码:
第10行,调用函数test_func(),提供两个实参:1、2。
(2)保持一致性。在程序中,可能会多处需要实现同样的功能,如果每次都写一遍实现,不仅浪费时间,使代码臃肿,不易读,还可能每处实现的功能有差异。但调用同一个函数,若函数修改了,则其他调用的地方都跟着改变了。这样不仅具体功能实现保持了一致,还能使代码更整洁,易读。
(1)只有一个关键字参数
说明:
(1)可简化函数调用。
代码:
1 def test_func(x,y,z):
2 """接受三个参数值,并打印它们"""
3 print(x)
4 print(y)
5 print(z)
6 return
7
8 print("---第一次调用---")
9 test_func(x=1,y=3,z=3)
10 print("---第二次调用---")
11 test_func(x=4,y=5,z=6)
Я программирую на Питоне шесть лет. За это время назрели некоторые мысли насчет языка и его роли в ай-ти. Я решил, что следующим погружением на несколько лет будет что-то функциональное. Зафиксирую соображения, пока не утратил контекст.
Ниже – попытка обозначить слабые и сильные стороны Питона. Постараюсь, чтоб не было банальщины в духе “много библиотек и широкое комьюнити”. Все сказанное – исключительно личное мнение, которое не навязываю никому.
Итак, преимущества:
Питон – коммерчески успешный язык
Приятно осознавать, что Питон используют не только в академических и любительских кругах, но и в бизнесе. Компании доверяют ему деньги. Много фирм используют Питон – Гугл, Пейпал, Инстаграм, НАСА.
Питон создает рабочие места. Вакансий много, в том числе и в России. Питон – промышленный язык. Он стоит в одном ряду с Плюсами и Джавой по востребованности. Разработчик на Питоне имеет шанс попасть в настоящую разработку продукта. Это не суррогат местного потребления вроде 1С.
Питон – очень легкий язык
Вместе с тем, Питон чрезвычайно легко освоить. Основы учатся за неделю. Еще неделю займут пакеты и подготовка окружения. В итоге, меньше чем за месяц можно добиться места в проекте.
В Питоне мало сложных тем. Так я называю разделы языка, на изучении которых останавливаешься отдельно. В Лиспе один сложный момент – макросы. В Скале больше – трейты, импликты, составные объекты. В Питоне два – декораторы и метаклассы. В повседневной работе метаклассы не нужны, поэтому остается только одна сложность. Разобравшись с декораторами, вы не встретите в языке других препятствий.
Питон сильно приблизил программирование к людям
Я ни разу не видел, чтобы этот пункт кем-то упоминался. Питон стал языком, с которым любой инженер, ученый или технарь без навыков программирования могут решить задачи промышленного уровня. Питон – инструмент решения проблем.
Появились специальные проекты и книги, например, Python For Engineers, A Primer on Scientific Programming with Python и другие. На работе я видел скрипты на Питоне, написанные людьми, у которых все знание программирования сводилось к школьной программе. Но скрипты работали, автоматизировали труд, а значит, экономили время и деньги.
С Питоном управлять машиной стало проще. На каждую задачу найдется копипаста со Стек-Оверфлоу. Что бы ни загуглили – разбить строку, построить график, перемножить матрицы – в первых трех ссылках выдачи обязательно будет решение.
Конечно, подобный стиль простителен только не-программистам, когда важно получить результат, а не надежный поддерживаемый код.
Питон – легкий способ попасть в промышленную разработку
Напоминает тезис о том, как попасть в шоу-бизнес через постель. Если вы занимаетесь чем-то около-айтишным, например, администрированием, сетями, поддержкой, 1С или старыми языками вроде Дельфи, то Питон – счастливый билет. Из пунктов выше мы выяснили, что это легкий язык промышленного уровня. Возможно, он станет той подножкой уходящего поезда, на которую успеете вскочить, прежде чем расхочется что-то менять и превозмогать.
Абзацем выше я фактически рассказал свою историю. До знакомства с Питоном я занимался сайтами на CMS, Дельфями и 1С. Конечно, и с этим можно найти работу. Однако, именно за счет Питона я продвинулся на старой работе, потом переехал в другой город за 6000 км и попал в Датаарт. Позже нашел удаленку в Европе.
Питон поддерживает все парадигмы программирования
Питон удивительно гибок. Его дизайн и синтаксис в равной степени поддерживают большинство парадигм. Стандартное ООП, императивный подход, процедурно-модульный, функциональный.
Каждая парадигма реализована в неполной манере. Так, в ООП нет приватных переменных и интерфейсов. Двойное подчеркивание – хак и обходится на раз. Интерфейсы пишут миксинами, но точное следование интерфейсу отследить невозможно.
Для полноценного ФП не хватает полноценных лямбд и неизменяемых коллекций. В тройке функцию
reduce запрятали в недра стандартной библиотеки.
Ленивые вычисления, декораторы и перегрузка операторов открывают пространство для интересных маневров. Питон мимикрирует под самые разные языки, например:
Fake Lisp – псевдо-Лисп, забавная попытка писать питонячий код S-выражениями.
Hask – закос под синтаксис Хаскела. Питонщикам очень полезно посмотреть, на какие ухищрения пошел автор, чтобы добиться столь точного сходства.
fn.py – функциональные ништяки из Скалы. Аналогично, очень интересно взглянуть, что под капотом.
Резонный вопрос, зачем такая гибкость? Мой ответ – чтобы оттачивать паттерны и подходы, расширять кругозор, принимать на вооружение лучшие практики из других языков и платформ.
Например, одна функция из Кложи, портированная в Питон, сэкономит много строк кода и избавит от досадных багов. Или, встретив новый паттерн, я пробую реализовать его в Питоне, чтобы оценить, насколько он уживается в рамках большого проекта.
Закончим с преимуществами. Питон не идеален. Вижу следующие недостатки:
Мелкие огрехи с синтаксисом
У Питона лаконичный синтаксис без лишних скобок, точек с запятой и прочей мишуры, нужной машине, а не человеку. Все же, остаются способы отстрелить ногу и провести час в дебаге, не понимая в чем дело.
Формирование кортежа. Может, кто-то не знает, что круглые скобки вокруг кортежа носят символический характер. На самом деле он определяется запятой. Возникает проблема кортежа с одним элементом:
foo = 1, 2 # it's tuple
bar = 1, # it's tuple
baz = 1 # it's int!
Забыл запятую – получил не кортеж, а число. Оставил на хвосте запятую – не число, а кортеж. Не страшно, код упадет из-за разных типов. Начнется боль, когда кортеж строк. Строка и кортеж поддаются общим операциям: итерации, доступу по индексу, срезу. Код отрабатывает без падений, но выдает неверный результат. Из-за одной запятой.
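A minimal sketch of the pitfall described above (the variable names are made up): with a missing or stray trailing comma both values still iterate, so the code runs but does the wrong thing.
names = 'alice',   # tuple with one string
name = 'alice'     # plain string

for x in names:
    print(x)       # prints one item: alice

for x in name:
    print(x)       # prints a, l, i, c, e -- no crash, just a wrong result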
Следующий пример. Две строки подряд без запятой сливаются в одну. Это значит, кортеж
statuses = ( 'success', 'error' 'timeout', 'denied', )
вырождается в
('success', 'errortimeout', 'denied')
потому что нет запятой после
'error'. Проверка 'error' in statuses вернет False, что есть прямое нарушение бизнес-логики.
Код на Питоне очень хрупкий и нуждается в обильном покрытии тестами. На шестом году работы с ним я до сих пор не уверен, правильно ли отработает тот или иной участок кода без теста.
Главенство ООП
Хоть Питон и поддерживает множество парадигм, основной остается ООП. Типичный фреймворк или библиотека – это набор классов, которые хранят состояние и меняют друг друга.
Вы когда-нибудь дебажили Джанго? Я да. Это был кастомный бекенд авторизации. Дебаг длился больше часа и напоминал БСДМ-сессию.
Объекты
Request, Response, User, Session устраивают свальный грех. Устанавливают ссылки друг на друга несколько раз. Лезут в кэш непонятно зачем.
Очень странные требования к бекенду авторизации. Метод
.get_user() отдает объект пользователя. Потом этот пользователь где-то теряется, на него не остается ссылки. Система запрашивает метод .get_user_by_id(). Я ведь уже отдал пользователя, дубина! Значит, нужно или снова лезть в базу или сеть, либо хранить в классе словарик, что не тред-сейф.
Почему-то в Кложе с ее неизменяемыми коллекциями мне ни разу не приходилось лазить в сессию, реквест или респонс задним числом и что-то там править. Изменения в коллекции прокидываются только вперед.
Я не призываю программировать в сплошном ФП-стиле. Признаться, все эти статьи в духе “Программирование на Питоне в функциональном стиле” выглядят жалко. Совершенно за уши притянуты редьюсы и трехэтажные лямбды. Такой код не уживается в рамках проекта и будет выпилен, даже если его удастся как-то протащить.
Это наносит урон функциональному программированию. Распространяется заблуждение, что ФП – это мапы и лямбды.
В Питон было бы неплохо добавить что-то, что поощряло отказ от состояния. Например, неизменяемый словарь, больше функций для работы с коллекциями.
Впрочем, это был бы уже совсем другой язык.
Низкая скорость
Еще год назад говорил, что это не недостаток вовсе. Память дешевая, хостинг недорогой и прочие отмазки. Сегодня считаю такой ответ уделом дилетанта.
Скорость исполнения важна. Представьте, весь ваш софт на компе написан на Питоне. Он бы работал раз в 20-30 медленней, об играх и фотошопах пришлось забыть. Это не просто шаг назад, а откат на несколько поколений.
Недавно мне нужно было парсить лог Nginx, Гугл выдал утилиту ngxtop. Я не посмотрел, на чем она написана, поставил из пакетов. Лог в несколько гигабайт она обрабатывала час. Оказался Питон. Утилита на Си распарсила за 5 минут. Неверный выбор стоил потери времени.
Питон долго брал скоростью разработки. Да, работает медленней, зато выкатим раньше, чем конкурент на Джаве. И вот появляются языки, которые совмещают скорость кода и скорость разработки. Если писать на
X и Y по времени примерно одинаково, а Y исполняется быстрей в 5 раз, зачем брать X?
Насчет дополнительного железа. Говорить, что поставить вторую ноду – плевое дело может лишь тот, кто пишет код и не притрагивается к деплою. Несколько нод – это распределенная система. Нужен балансировщик, разделение логов, проблемы синхронизации, согласованные обновления.
В моем личном проекте долго жил Питон, нагрузка постоянно росла. Сервер задыхался, я не успевал поднимать тарифный план. Затем плюнул и переписал на Кложе. Не забуду это чувство, когда график CPU упал с 80 до 20 процентов.
ГИЛ
На упрек в адрес ГИЛа отвечают в духе “он вам не нужен”, “используй процессы”. Так говорил и я до тех пор, пока не познал, как работает многопоточность в функциональных языках.
Мы запускаем веб-приложение на Питоне под uWSGI так. Стартует мастер-процесс. Он управляет воркерами – процессами, в каждом из которых веб-приложение. Единовременно запущены 16 Джанг, каждая из которых отвечает, когда другая занята.
Мастер-процесс следит, чтобы все были равны. Убивает тех, кто не отвечает, выжрал слишком много памяти или перетрудился – выработал лимит запросов.
Я считаю этот способ уродливым. Во-первых, налицо недоверие к технологии и ее нестабильное поведение. При длительной работе процессы текут, даже мастер-процесс. Это значит, систему нужно перезапускать.
Распараллеливание на процессах лишает общего доступа к памяти, вынуждая использовать суррогаты вроде Memcache или Redis, в то время как самый эффективный кеш – общая память.
Мастер-процессы вроде uWSGI и Green Unicorn – дополнительная прокладка в цепи, по которой проходит запрос. Лишний I/O, точка логирования и падения.
Наконец, с ГИЛом нельзя ничего распараллелить, разве что запросы в сеть. Функциональные языки лишены этих недостатков. Неизменяемость коллекций в Кложе, система каналов в Гоу позволяют рулить многопоточностью без страха отстрелить ногу. Это выкашивает целый пласт инфраструктуры: костыли в виде очередей, воркеров и крон-скриптов. И дополнительные системы, чтобы все это поддерживать.
Трудности распространения
Приложение, написанное на Питоне, трудно распространять. Маленький скриптик требует интерпретатор определенной версии, виртуальное окружение с кучей библиотек. На вопрос, как перенести окружение на машину пользователя, нет точного ответа.
Несколько лет назад я пробовал различные упаковщики:
Py2Exe, PyInstaller, CxFreeze. Собрать без ошибок приложение под Винду смог только последний. Сделал это очень хорошо: выдал *.msi-инсталлятор, зарегистрировал системную службу. Но времени на сборку и чтение доков ушло немало. Почему разработчики Питона не озаботятся тем, чтобы включить в коробку систему распространения конечного продукта?
Чтобы написал скрипт, запустил сборку и получил архив с исполняемым файлом. Почему функциональные языки это умеют, а Питон – нет?
Кложа выдает uberjar. Собираю его на Маке, загружаю на хостинг, где кроме Джавы ничего нет – работает без проблем. Хаскель компилируется в бинарь. Ракет, Раст – тоже. Коммон Лисп, которому тридцать лет, Карл, делает дамп Лисп-машины в бинарный файл!
Фактически, чтобы запустить веб-приложение на Питоне, нужно иметь примерно 10.000 *.py-файлов. Если не будет одного, система не заведется. Изолировать это многообразие можно Докером, но здесь мы кривим душой.
Листая документацию к третьему Питону, обнаружил занятный модуль zipapp. В первую минуту подумал, что напрасно упрекал язык, дело сдвинулось с мертвой точки. Оказалось, модуль не поддерживает виртуальное окружение и сишные библиотеки. Собрать простое приложение с драйвером Постгреса я не смог.
Заключение
Такие получились пункты. Надеюсь, никого не обидел критикой в адрес языка.
Ничуть не жалею, что потратил на Питон столько лет. Пусть следующий язык внесет в мою жизнь не меньший вклад.
Прочитал статью Лисп – философия разработки от 2010 года. Решил законспектировать заключительный раздел с выводами – очень уж понравились.
Если рассмотреть каждый отдельный язык, то, как правило, можно найти его объединяющую идею. […] Для Common Lisp объединяющей идеей на данный момент является как раз расширяемость.
Не могу с точностью судить о CL, но расширяемость в Кложе(скрипте) сделана на высоте. Пожалуй, это единственная экосистема, где любой разработчик может внести свой контриб в чужой тип или протокол без обезьяних патчей, как в Руби и Питоне. Напоминает перегрузку операторов, но в масштабах всей Лисп-машины. Это надо просто попробовать.
В то же время у нее [Джавы] есть и сильная сторона, а именно пронизывающая весь язык модель семантической расширяемости через интерфейсы. Так что в общем-то не удивительно, что язык Clojure появился именно на Java-платформе…
Да, единственно хорошая вещь в ООП – это интерфейсы. Недавно я узнал, что в Кложе(скрипте) вообще все с ног до головы построено на интерфейсах. Это и придает языку такую мощь.
Объектно-ориентированный подход обещал, что базовыми компонентами повторного использования будут классы и их наборы. Однако, как показывают провалы многих технологий, таких как JavaBeans, это тупиковое направление. Ведь отдельные классы не могут быть независимыми компонентами, потому что в этом случае они рискуют превратиться либо в пространства имен, либо в монолитные монстры, противореча самой идее декомпозиции, основанной на классах.
Подпишусь всеми конечностями. Не раз видел: есть жирный класс, нужно создать похожий. И почему-то разработчик, который вчера заливал про преимущества ООП, выделяет мышкой и копипастит в соседний модуль. А как же наследование, спрашиваю я? Все понятно. Классы хоть и похожи, но приведение к общему знаменателю будет стоить так дорого, что просто невыгодно. И юнит-тестов может не хватить.
В тех же языках, где расширяемость затруднена […] есть тенденция к постепенному включению всего необходимого в стандартную библиотеку или же в какие-то крупные фреймворки, каждый из которых зачастую развивает собственный механизм подключения и повторного использования. Здесь структура зависимостей — это дерево или же несколько отдельных слабо пересекающихся деревьев.
Вспомнился бедовый Джаваскрипт. Не везет языку. При всех возможностях к расширению через прототипы, разработчики плодят тысячи микро-модулей по 10 строк каждый. Расширять стандартную библиотеку никто не думает. Так сообщество двигается в пропасть.
В этом смысле показателен пример Python, известного своим девизом «Батарейки в комплекте», который подразумевает наличие в стандартной библиотеке (почти) всего, что нужно для разработки. […] Обратная сторона медали тут отмечена в другой цитате: «Пакет, который попадает в стандартную библиотеку, стоит одной ногой в могиле»
Меткое замечание. Не случайно автор библиотеки requests, ставшей де-факто для HTTP-запросов, заключил с Гвидо неформальное соглашение. Согласно ему,
requests НЕ включается в стандартную библиотеку (чего так хотелось Гвидо), но рекомендуется к использованию на страницах официальной документации. Хитрый мужик! Знал, чем чревато.
В Common Lisp для этого [построения расширяемых систем] используются следующие технологии: макросы для создания мини-языков для декларативного описания систем, дистрибутивов (в отличие от использования для этих целей внешних языков, таких как XML)…
Да, достоинство Лиспов – всегда можно написать доменный язык под любую задачу благодаря системе макросов. В Лиспе построение XML, HTML, SQL вырождается в декларативное дерево с одноименными узлами (cdata, div, select).
Кому интересно, загляните и в другие статьи Практики функционального программирования. Актуальность они не потеряли и еще долго ее сохранят.
Методом проб и ошибок выработал набор практик, с которыми работаю лучше. Стал допускать меньше ошибок, легче отслеживаю бизнес-логику, быстрее вношу изменения в код. Эти практики не следуют строго определенным парадигмам. Наверняка под каждую придумали паттерн, но я об этом ничего не знаю и расскажу простыми словами.
Первое. Указываю тип переменной в имени
Задаю переменным имена по правилу
<entity>_<type>. Открыв код месячной давности, сразу вижу, где и какие типы. Даже самая коммерческая ИДЕ порой не может понять, с чем имеет дело. А с моим правилом именования работать одно удовольствие.
Применяю его не слепо, а выборочно. Скалярным типам, например, строкам и числам, не указываю тип, если он ясен из контекста. Выражения
name_str = 'test' или age_int = 42 избыточны, поскольку имя и возраст вряд ли могут быть чем-то отличным от строки и целого.
Я добавляю в конце тип, если он неочевиден из контекста. Предположим, из ответа чужой апихи пришло поле
permission. Что это – строковое имя, числовой код, булево – понять со стороны невозможно. Все, что я могу – слазить в документацию или промотать в другое место, чтобы увидеть, что с этим полем делают дальше.
permission = response['data']['item']['permission']
# wtf is permission?
А ведь достаточно назвать переменную
perm_int, и все станет ясно – это же числовой код!
Указывать тип стоит везде, где кроется неожиданность. Айдишка объекта может быть передана строкой, поэтому назову переменную
user_id_str, а дальше преобразую в инт. Поле может называться item, а внутри – гуид сущности, а не словарь. И так далее.
Коллекциям задаю тип без исключений. В Питоне достаточно много разных коллекций. Чаще всего нас интересует только итерация, но шаг в сторону, и программа падает.
Примеры? Хотели список, чтобы изменять элементы, а пришел кортеж. Итерация по множеству проходит в разном порядке. Хеш от списка вызывает исключение. Пройтись по итератору можно только один раз. И так далее.
Если работаю со словарем с данными о пользователе, называю
user_dict. Список пользователей – user_list, множество – ..._set и так далее, принцип, думаю, понятен. Для кортежей и итераторов окончания tupl, iter.
Отдельно стоит упомянуть тип
Queryset, с которым постоянно работаешь в Джанге. Обозначаю его как qs. С этим типом сплошная беда. Он всеми силами мимикрирует под список и кидает в неподходящий момент.
Смотрит коллега в монитор и не понимает, отчего Pytest падает и выводит следующее:
assert [1L, 2L, 3L] == [1, 2, 3] >>> long trace...
Потому что справа список, а слева – квери-сет. Он выводится как список, но не равен ему.
Отдельным абзацем замечу, что не приемлю лексемы
data в именах переменных. user_data, response_data – ужасные имена. Любая переменная, даже ноль, уже несет данные. Понятней не стало. Это словарь, список или что?
Добавлять на конце
s тоже нет смысла. Коллекция подразумевает больше одного элемента. Если не указан тип, я опять в беде: users – это сет, словарь или кортеж? Можно ли брать слайс? Подставить в ключ словаря?
Падение на
None (он же nil, null, undefined, etc) – особая история. В программировании до сих пор нет понимания, что делать с пустыми типами. Чтобы обезопасить код, полезно явно задать имя вроде user_or_none или, для краткости, user_none. Это вынудит программиста выполнить проверку перед тем, как что-то делать с данными.
Второе. Избегаю циклов
ручное накопление списка или словаря чревато багами
со временем цикл разрастается, обрастает вложенными for, if
из-за отступов плохо видно бизнес-логику
цикл поощряет плохую практику – впендюрить continue вместо того, чтобы отфильтровать данные до входа в цикл
цикл плохо поддается рефакторингу, поскольку затягивает контекст – коллекцию-результат, локальные переменные, вложенные циклы.
Решение – использовать функции высшего порядка
map, filter. Я отрицательно отношусь к трехэтажным лямбдам. Использую обычные функции, объявленные через def/func/defn.
def get_item_list(user_id):
    def get_item(product):
        ...
    item_qs = models.Item.objects.filter(user_id=user_id)
    return map(get_item, item_qs)
Я просто объявляю функцию в том месте, где она нужна, и не парюсь за производительность или дзен. Код становится на рельсы:
коллекция –> фильтрация –> действие –> другая коллекция –> свертка. Появляется ощущение структуры программы, приходит упорядоченность.
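A runnable sketch of those "rails" with plain dicts instead of ORM objects (the data here is invented):
items = [{'id': 1, 'color': 'red'}, {'id': 2, 'color': 'blue'}, {'id': 3, 'color': 'green'}]

def not_red(item):
    return item['color'] != 'red'

def get_id(item):
    return item['id']

# collection -> filter -> action, no manual accumulation and no continue
res = list(map(get_id, filter(not_red, items)))
print(res)   # [2, 3]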
Добавить новое бизнес-правило в такой код очень легко. Это будет или еще один фильтр, или изменится действие над элементом. В любом случае не съедет весь код, как в примере ниже:
items = users.get_items()
res = []
for item in items:
    if item.color == 'red':
        continue
    res.append(item.id)
Приходит эффективный менеджер и говорит, что теперь операция должна выполняться по всем юзерам. Не вопрос, отвечает программист и тупо сдвигает табом:
res = []
users = get_all_users()
for users in users:
    items = users.get_items()
    for item in items:
        if item.color == 'red':
            continue
        res.append(item.id)
Дифф покажет полную замену кода. Добавить сюда еще пару вложенных условий, перехват ошибок, запись в лог – и код останется выкинуть на помойку.
Простое правило “действие, коллекция, мап, свертка” работает без нареканий и легко адаптируется под новые требования.
Третье. Отлавливаю ошибки как можно раньше
Почти любая операция небезопасна и может кинуть исключение. Проблема в том, что одновременно писать бизнес-логику и следить за ошибками трудно. Каждая ошибка – это блок
try-catch и запись в лог, за которыми не видно главную мысль.
Исключения уже вовсе не означают исключительную ситуацию. Они стали сигналами. Тот же Питон кидает и сам отлавливает определенные исключения в ходе работы. Эта практика перешла и в бизнес-логику. Например, когда нет прав на операцию, выбрасывают исключение
PermissionError. Обработчик сверху ловит его и выводит адекватный результат.
Мне не нравится эта ситуация, потому что она ненадежна. Язык не может внятно сказать, какие исключения возникают при конкретной операции. Это может быть описано в документации, но чаще всего выясняется эмпирически.
Не отлавливать свои же исключения неправильно с этической точки зрения. Ты словно говоришь коллегам – вот написал код, но меня не волнуют ошибки. Да, упадет, если в ответе нет ключа. Но ты оберни и залогируй. Превозмогай, это не мои проблемы.
Заворачивать весь код в
try-catch – не выход. Поможет возврат пары, как в Golang. С небольшим отличием – не (ok, result_or_error), как принято в последнем, а (err_or_null, result_or_null), как в Node.js. Второй вариант логически правильней.
Заворачиваю функцию в простой декоратор:
def err_result(f):
    def wrapper(*args, **kwargs):
        try:
            return None, f(*args, **kwargs)
        except Exception as e:
            return e, None
    return wrapper
Или вызываю just-in-place:
def some_func(foo):
...
err, result = err_result(some_func)(42)
Вариант с мапом. Функция-обработчик раскладывает пару на составные части с помощью деструктивного синтаксиса:
item_queryset = models.Item.filter(...)
def process_item(item):
    ...

pair_list = map(err_result(process_item), item_queryset)

def process_pair((err, result)):
    if result:
        pass  # positive branch
    if err:
        pass  # negative branch
map(process_pair, pair_list)
Или отделяю котлеты от мух: разбиваю список пар на плоские списки ошибок и результатов. Отдельно логирую ошибки. Передаю результаты на дальнейшую обработку. Так в коде появляется порядок.
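A small sketch of that separation step, assuming the (err, result) pairs produced by the decorator above:
def split_pairs(pair_list):
    # flat list of errors for logging, flat list of results for further processing
    errors = [err for err, _ in pair_list if err is not None]
    results = [res for err, res in pair_list if err is None]
    return errors, results

pairs = [(None, 1), (ValueError('boom'), None), (None, 3)]
errors, results = split_pairs(pairs)
print(errors)    # [ValueError('boom')]
print(results)   # [1, 3]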
Конечно, в случае с одноразовым скриптом я могу завернуть все в глобальный
try-catch. Но прекрасно отдаю себе отчет в том, какие последствия это имеет в боевом коде.
Заключение
Вот такие принципы я проповедую в текущем проекте. С ними стало работать легче. Меньше падений на типах, внезапных трейсов.
Повторюсь, описанные принципы не идеальны, но с ними возникает чувство порядка. Словно код становится на рельсы, а вместе с ним и процесс. Возникает линейность, предсказуемость действий.
Кто-то скажет, что это не питоник-вэй, что диктатор не велел. Но кому это интересно? Мы пишем код не для Гвидо или Торвальдса, а для начальников, которые в гробу видали все паттерны, главное, чтобы код работал.
Допускаю, что прочту этот пост через 2 года и подумаю, каким
чудаком я был, но пока что так.
Наткнулся на полезный сервис Request Bin.
Частенько бывает, какой-нибудь Твиттер, Фейсбук или Пейпал пингуют ваше приложение. Шлют нотификации о платежах, событиях. Проблема, что приложение еще не на сервере, а интегрироваться как-то нужно.
В документации стороннего сервиса часто описаны не все возможные случаи. Приходится закачивать скрипт-заглушку и писать запросы в файл. Это не то.
Поможет Request Bin. Идея сервиса гениальна. Вам дают уникальный урл, например,
http://requestb.in/1j2bq6r1. На все обращения сервис отвечает 200 ОК, а сам записывает входящий запрос в память. По адресу http://requestb.in/1j2bq6r1?inspect сервис покажет последние 20 запросов: дату, метод, заголовки, параметры.
Полученный урл указываем в настройках стороннего сервиса. Выполняем операции, смотрим, какие запросы пошли.
При бесплатном использовании запросы хранятся в мемкеше, поэтому в любой момент данные можно потерять. Впрочем, мои запросы хранятся уже третий день.
Request Bin поддерживает соединение по протоколу HTTPS, правда, с самоподписанным сертификатом. Это помогло при интеграции с Пейпалом. Последний требует, чтобы урл начинался с
https://, но не проверяет сертификат.
Сервису есть, что улучшить. Если в теле запроса был json, парсить и показывать красиво. Сейчас выводит как пришло – в одну строчку. Добавить экспорт запросов в любой формат. Показывать сырой запрос (т.н. raw HTTP).
Нет, это не про встроенный в Киндл браузер. Я продолжаю эксперименты с читалкой, купленной три месяца назад. Пытаюсь понять, как с ее помощью навести порядок в чтении.
Раньше, найдя в сети большую интересную статью, терялся от незнания, что с ней делать. Читать сейчас – времени нет. Держать открытой вкладку? Уже 20 открыто, куда больше. Остается жать “Добавить в избранное” и забывать о статье навсегда.
Статья в избранном – это закрытая вкладка. Энтропия и шум настолько сильны, что по факту к статье вы не вернетесь.
День за днем растут списки “прочитать позже”. Когда открываешь наконец что-то из списка, понимаешь, что на сегодня уже выработал потенциал. Все, на что тебя хватит – пара линков из чата со смешными картинками.
Кто-то шлет интересные статьи себе на емейл. Как Столлман, но тот всего лишь боится проприетарного JS-кода. Уже лучше, чем закладки: централизованное хранилище, доступ с любого устройства, сносная поддержка мультимедии.
По моему опыту долгое и плодотворное чтение за ноутом невозможно. Ноут – это инструмент для работы и коммуникаций, а чтение – удовольствие для себя. Поэтому, когда садишься с читалкой, мысленно соглашаешься с тем, что сейчас твое время. А за ноутом сто раз отвлекут коллеги.
Киндл помог разрешить этот вопрос. Ключевая особенность Киндла в том, что книжки закачиваются по емейлу. И не только книжки, а любые документы. Появились сервисы, которые шлют емейл с документом на адрес в Амазоне, а читалка скачивает его из облака.
Прекрасное расширение для Хрома и Лисы закачивает любую веб-страничку в Киндл. Удаляет весь мусор, красиво компонует текст. На экране читалки смотрится изумительно.
Теперь с материалами я работаю так: листаю любимый RSS-клиент, если нахожу что-то стоящее, клик – и в Киндл. Читаю во время прогулок с сыном. Летом посидеть на лавочке с читалкой – красота. Технический минус, расширение не поддерживает картинки. Очень странно. Сделали бы премиум-фичей, я бы заплатил.
В прошлый вторник прошел слаконар на канале Хекслета об удаленной работе. Я рассказал кое-что, ответил на вопросы. Публикую ниже отредактированную версию чата – с исправленными формулировками, удобной структурой. Сюда же добавил некоторые вопросы, которые задавали в чате чаще других.
Выложить этот материал я планировал в декабре, когда моей удаленке исполнится год. Но если уж рассказал в слаке, тянуть смысла нет.
Содержание
Введение
Общий порядок работы
Контракт, отпуск, увольнение
О слежке
Виды удаленки
О поиске удаленной работы
Общение
Английский
Деньги
Налоги
Как жить с удаленной работой
О прокрастинации
Другое
Введение
Последние 6 месяцев я работаю удаленно в лондонской фирме. Я задумался об удаленке примерно за полгода до ухода из прошлой компании. Начал постепенно собирать информацию. Нашел вакансию, связался, выполнил практическое задание, прошел собеседование – и все заверте…
Собеседование стандартное. Кто такой, какие проекты, что умеешь. Просили спроектировать игру “Монополия”. Задавали вопросы на тему сети, пингов.
Задание было написать геопоиск без базы данных, на уровне структур. Кстати, это было довольно интересно. Построение дерева, обход и тд. Иногда я мысленно возвращаюсь к нему и пытаюсь найти более оптимальные способы решения. Попутно я узнал, что такое геохеш, кривые Лагранжа и прочие штуки.
Я общаюсь с другими удаленщиками, в том числе и не с нашего проекта. Не все, о чем пойдет речь ниже, я опробовал лично, но слышал от коллег.
Для начала разберемся с терминологией. Не стоит путать удаленку с фрилансом. Разница большая. Удаленный сотрудник – это полноценный член команды. Он разделяет ценности компании, заинтересован в ее росте. Сотрудник может работать удаленно в течение долгого времени, получать повышения. Напротив, фриланс – это большей частью мелкие задачи. Сотрудника волнуют проблемы фирмы, ее рост и вектор развития, фрилансера – нет.
Про фриланс-биржи вроде ODesk и Upwork я не особо в теме, поэтому не буду о них говорить. В моем видении удаленная работа носит более высокий приоритет, чем фриланс.
Общий порядок работы
Я работаю 5 дней по 8 часов, 40 часов в неделю, 2 выходных, как в РФ. Переработок еще не было. Труд оплачивается по часам. Есть понятие рейта — N долларов в час. Проходит месяц, вы умножаете рейт на 8 и на число дней. Выставляете счет на полученную сумму.
Обычно заказчик и работник находятся в разных часовых зонах. Оговаривают, каким будет временное пересечение. Например, 5 часов в течение суток. Если требуется постоянная связь на случай факапов, это оговаривается отдельно.
Всегда можно отлучиться по важному делу, сообщив заранее в слак или скайп.
Контракт, отпуск, увольнение
Контракт высылают на почту. Подписываете и отправляете скан. Таким же образом происходит работа со всеми другими документами. Слать нотариально заверенные копии не нужно.
Отпуск не оплачивается. Вы уходите на две недели – получаете зарплату в два раза меньше. Отдыхали месяц – зарплаты нет. Впрочем, отдых длиною в месяц в ай-ти – штука мифическая.
Порядок увольнения может быть оговорен по желанию сторон. Чаще всего компания оставляет за собой право расстаться без объяснения причин в любой момент. Предупреждают за неделю или около того.
Это краеугольный камень в удаленной работе – ты ничем не защищен. У заказчика не болит голова, если нужно избавиться от человека, в котором ошибся.
Причины увольнения тоже могут быть разные. Может, компания освоила бюджет и вы теперь больше не нужны. Иногда удаленщиков берут, чтобы тупо освоить инвестиции. Или у вас не сложилось из-за трудностей в коммуникациях, бывает. Не стоит убиваться, делайте выводы и ищите новую работу.
В чем основной профит удаленки по сравнению с фрилансом? В том, что это работа, рост, повышение компетенции. Чем больше ты знаешь о фирме, тем сложней тебя уволить. В штате бывают настолько компетентные удаленщики, что их мгновенное увольнение ставит бизнес под угрозу. Они выходят из фирмы по полгода – настолько велика их погруженность.
О слежке
В чате было много вопросов о том, следит ли за мной заказчик. И как контроллируется процесс. Конкретно в моем случае мне
НЕ нужно устанавливать особое ПО, которое следит за десктопом и шлет скриншоты начальству. У нас Джира, в задачах выставлен примерный эстимейт, который предварительно обсуждаем всей командой. Взявшись за задачу, ты должен в него уложиться, либо объяснить, на что уйдет дополнительное время.
От людей, работающих через Crossover, я знаю, что им ставят программу, которая трекает пребывание в определенных программах. Например, столько времени провел в редакторе, столько – в браузере, столько – в почтовом клиенте. Какие программы или вкладки в браузере считать полезными – это оговаривается с заказчиком заранее.
Отработать 8 часов полностью нереально. Есть определенный КПД. По моим подсчетам, в плодотворный день удается поработать 6 часов. Чтобы сделать 8 полных часов, надо сидеть за компом 11 часов. Ноу лайф, словом. Максимум на день-два.
Чем больше заказчик доверяет личному общению взамен программы-шпиона, тем лучше. Я лично считаю, достаточно 5-минутного разговора, чтобы понять, работает человек или валяет дурака. А то, что он потратил час на блог или личный проект, роли не играет.
Виды удаленки
Я разделяю два вида удаленки – внутреннюю (в рамках текущей страны, для меня – России) и внешнюю – Европу, США, Сингапур и тд. Кратко опишу первый пункт, чтобы затем на него не отвлекаться.
Внутренняя удаленка не сильно отличается от обычного трудоустройства. Недавно из интереса я собеседовался на удаленную вакансию по Clojure в Москве. Обычное собеседование с техлидом. Вы устраиваетесь официально, подписываете документы по почте, отправляете трудовую книжку. Вам присылают карточку, на которую раз в месяц переводят зарплату. Разница в том, что сидите не в офисе.
Или заключаете трудовой контракт на определенный период. Или вообще ничего не заключаете, а деньги вам скидывают на карточку или Яндекс-деньги, и все держится на доверии.
Гораздо интересней говорить на тему внешней удаленки. Здесь сразу куча всего.
Какие плюсы и недостатки есть в удаленной работе? Пройдусь кратко, затем каждый пункт проговорим подробно.
Деньги. Европейская зарплата выше российской. С другой стороны, получение валюты из-за рубежа влечет за собой некоторые трудности. Придется открыть ИП, регулярно платить взносы, налоги, сдавать отчетность.
Ответственность. Удаленка мне кажется более высокой формой сотрудничества, потому что повышается ответственность. Дело в том, что много вещей приходится разруливать самостоятельно. Минус, эту ответственность нужно быть готовым принять. То есть теперь вы не только пишете код, но и занимаетесь предпринимательством, платите налоги, обо всем договариваетесь с заказчиком и много всего другого.
При этом еще нужно успевать работать. Когда вы обычный работник, с вас взятки гладки сразу по всем показателям.
Свобода в выборе. Удаленной работы сейчас много, и хороший сотрудник может выбирать, с кем работать. Например, вы отработали 5 лет на PHP, понимаете, что не растете, пора двигаться дальше, но в городе вакансии только на Пых и 1С. Остается только переезд. Но если у вас семья, дети? Вы же не потащите их через всю страну только потому, что захотелось писать на Хаскеле?
О поиске удаленной работы
Где искать удаленку? Есть хороший сайт Remote OK – агрегатор удаленных вакансий.
Они же попадаются на Stack Overflow с тегом
remote.
Вакансии неплохо ищутся на сайтах конкретных языков или технологий, например, для Питона это раздел Jobs на официальном сайте.
На всех указанных выше ресурсах есть RSS-ленты, что облегчает трекинг.
Есть рекрутинговые агентства, специализирующиеся на особых технологиях, например, Functional Works – подбор программистов на функциональных языках. Чтобы увидеть расширенную информацию на борде, надо зарегистрироваться. Сейчас там висят 3-4 удаленки. Из прямого общения с рекрутером можно вытянуть больше.
Наконец, лучший источник – это профиль на Линкед-ине. Нужно заполнить профиль максимально полно, указать проекты, образование, степень знания английского, сертификаты, волонтерские проекты. И что важно, скилы. Попросите заказчиков отписать вам рекомендации. После этого рекрутеры начнут наплывать толпами.
Я предполагаю, срабатывает какой-то алгоритм, согласно которому, если человек часто обновляет профиль, он заинтересован в поиске работы, и его профиль чаще всплывает в поиске. Но это только догадка.
В чате был хороший вопрос о том, как понять, пора ли идти искать удаленку. На мой взгляд, удаленке должна предшествовать работа в промышленной разработке. Вы должны уметь работать в команде, владеть целевой технологией, иметь в портфолио несколько проектов. Словом, пройти боевое крещение.
С другой стороны, если нацелены на удаленку, не стоит оттягивать процесс. Крупные компании могут усыплять бдительность. Повысят в должности, прибавят зарплату. Печенек отсыпят, в пейнтбол повезут. А потом семья и дети, нужна стабильность, поезд ушел – менять что-то будет страшно.
В удаленке востребованы все те же промышленные языки и технологии, что и в индустрии в целом. Плюсы, Джава, Пых, Питон, Руби, Шарп, Джаваскрипт. Встречается Гоу и функциональные вещи вроде Кложи, Скалы, Хаскела. Везде без исключения надо знать БД, алгоритмы и основы веба.
Общение
В удаленке очень важны коммуникации, я это понял не сразу. Удаленного сотрудника оценивают в первую очередь по тому, насколько легко получить от него отклик. Это значит, нужно быть готовым говорить, обсуждать.
Как правило, удаленщик находится в другой часовой зоне, поэтому оговаривают, в какие часы сотрудник должен быть на связи. Наиболее благоприятный случай – когда зоны различаются на 2-3 часа, как Москва и Лондон, например. Люди, которые работают с США, испытывают трудности из-за слишком большой разницы во времени. У вас 18-00, а у них только утренний кофе. Постепенно ваш распорядок съезжает на ночное время, что плохо для здоровья и социальной составляющей.
В штате могут быть другие русскоговорящие сотрудники, но на созвонах это не поможет – все должны понимать друг друга. Разве что, они могут дать полезные советы с глазу на глаз или подбодрить, если дела идут плохо.
Используют стандартные средства связи – слака, скайп, хенгаутс, почта. Нельзя шутить про секс-меньшинства, цвет кожи, беженцев и другие больные для европейского общества темы.
Английский
В общении все упирается в английский. Чтобы преуспеть в удаленке, нужно понимать тонкости языка. Бывает, потребуется защищать свою точку зрения. Вы предложили использовать новый фреймворк, отвечают – подготовь презентацию и расскажи, какая польза бизнесу. И будь фреймворк трижды крут, вам не дадут добро, если не сможете донести мысль.
Никогда не нужно извиняться за язык или обращать внимание на проблемы в общении. Это и так понятно и только напрягает обе стороны. Просто вежливо переспрашивайте.
На прошлой работе я посещал группу английского 2 года. Маленькие группы очень эффективны, потому что в них сильна конкуренция. И все же, начав работать удаленно, я обнаружил, что навыков недостаточно для эффективной коммуникации. Срочно занялся английским. Вот что мне помогло.
Первое – улучшить чтение. Приходится много читать. Документация, переписка, скайп, чаты в слаке. Бывает, сообщения сыпятся, а ты уже минуту одупляешь одну и ту же фразу. Помогла электронная читалка Kindle. Я читаю тех. литературу на английском на темы, которые хорошо знаю – Емакс, Лисп. Когда знаешь контекст, вопринимать легче. К тому же, в Киндле есть встроенные словари – увидел незнакомое слово, нажал, сразу вышло его транскрипция и толкование. Причем это не голимый гугл-транслейт, а оксфордский словарь с примерами.
Я не очень рекомендую бумажные книги, потому что неудобно лазить в словарь и переводчик. Особенно в худ. литературе много прилагательных, смысл которых можно смутно понять из контекста, но все равно они останутся неясными мозгу, пока не посмотришь в словаре.
Второе и гораздо более важное – улучшить восприятие на слух. У людей может быть самый разный акцент. Кого-то я хорошо понимаю, кого-то нет. По иронии судьбы, хуже всего понимаю начальника, а с этим шутки плохи. Чтобы улучшить восприятие, я рекомендую смотреть эти короткие видео.
Каждое видео длится три минуты. Смотреть нужно по следующей схеме:
смотрим и читаем бегущую строку. Важно, чтобы все слова были понятны.
Слушаем снова, но не читаем.
Слушаем на скорости 1.25. Если что-то не понятно, повторяем или идем на предыдущий шаг.
Аналогично, только на скорости 1.5.
Аналогично, только на скорости 2. Это максимальная скорость. На ней лучше прослушать несколько раз.
Как вы догадались, секрет состоит в постепенном нарастании скорости. Поскольку с каждым просмотром нарастает контекст, мозгу становится легче воспринимать голос.
Скорость можно менять только в браузере, айпад и айфон эту возможность не поддерживают. На одно занятие уходит 15 минут. Делаем так строго каждый день утром, как только садимся за комп, вместо смешных картинок или Фейсбука. Эффект станет заметен буквально после двух занятий. Включите любую популярную песню и убедитесь, что стали воспринимать больше слов, чем раньше.
Деньги
Закончим про английский, вернемся к удаленке. Поговорим подробней о деньгах.
Зарплата из-за рубежа выше. Грубо говоря, зарплата мидла в Европе равна зарплате синьора в России. Или даже больше. Эта разница вызвана следующими факторами:
Жизнь в Европе дороже, там очень высокие налоги. Треть или даже 40% дохода европейца уходит на налоги.
Удаленный сотрудник не защищен профсоюзами. Это покажется странным, но на западе человека не могут уволить месяцами, потому что трудовой договор сильно защищает работника. Еще есть профсоюзы. Удаленщика никто не защищает. По контракту фирма предупреждает за неделю и давай до свидания.
Денежные переводы, которые фирма отправляет сотруднику, не облагаются налогами. Для фирмы это как купить мешок картошки. Из пункта о величине налогов следует экономия бюджета в 40%.
Из-за разницы в ценах между странами заказчик покупает синьора по стоимости мидла-джуна. Это устраивает обе стороны.
Кто-то может сказать, что ты не стоишь этих денег, потому что просто играешь на разнице цен. Но подобно тому, как вода движется от высокого давления к низкому, экономика работает на разнице цен. На моем ноуте написано “Designed in California, assembled in Malaysia”. Одну работу выгодней делать в одной части земного шара, другую – в другой.
Работа на Европу или США – это законный долларовый доход. Сегодня в России абсолютно все зависит от цены доллара – от продуктов до техники.
До введения санкций я покупал Макбук за 65тр, теперь он стоит 120тр. Шоколадный батончик стоил 20 рублей, стал 41. Наша семья потребляет много молока – раньше брал за 35 рублей, однако с введением продуктового эмбарго качество молока, сыра, масла резко ухудшилось. Приходится покупать молоко за 65 рублей. Словом, в современной России рублевый доход – очень шаткая штука.
Для получения валюты нужен статус ИП с валютным счетом. Сейчас есть МФЦ, где все делается быстро, буквально неделя. Подготовить бумаги помогут сервисы вроде “Мое дело” и “Контур Эльба”. Они же составят график выплат взносов, налогов и многое другое.
Я работаю в Эльбе. Первый год бесплатен, потом 4 тр в год.
Есть разные системы налогообложения, самая простая УСН – 6 % с доходов. Каждый квартал платите 6 процентов с того, что заработали. Своевременно вносите в Эльбу заработанные суммы, система сама рассчитает налог. Суммы в долларах автоматом переводятся в рубли по тому курсу ЦБ, который действовал на момент их получения. Ставить неверную дату, чтобы снизить курс с целью уменьшить налог нельзя – это противозаконно.
Получать валюту физлицу либо нельзя, либо можно, но с большим геморроем. Я даже не стал разбираться в тонкостях.
Когда вам присылают валюту, она падает на транзитный счет. Вы не можете снять ее. Нужно предоставить пакет документов, которые подтверждают законность перевода. Состав документов зависит от банка. В моем случае – это 7 документов, которые я должен предоставлять каждый месяц.
Есть такое понятие как паспорт сделки, когда показываете документы один раз. Их фиксируют в некую юридическую сущность, а вы ссылаетесь на нее многократно.
Существует система Payoneer, которая создает вам счет в американском банке и присылает карточку для снятия денег. Я ни разу с ней не работал, но вижу минусы в следующем:
счет принадлежит не вам, а фирме. В Америке с этим строго – счет открыть может только резидент. То, что на карте ваше имя – формальность, можно хоть имя собаки написать. Если счет заблокируют, вы ничего не догоните.
На каждую операцию – поступление на счет, конвертация, снятие – комиссии по принципу
(N + %) USD. У ИП, если операции протекают внутри банка, комиссии гораздо ниже.
Я пользуюсь услугами известного банка. Хотя и был с ними досадный инцидент, услугами я доволен. Любая операция доступна в личном кабинете. Сайт работает только в Файерфоксе, интерфейс очень кривой, нужна Джава, но это мелочи.
Налоги
Вам придется платить налоги уже после получения денег, иногда это могут быть крупные суммы. Расставаться с ними уже после получения трудно. Не удивительно, что человек, который платит налоги сам, становится более требовательным к себе и окружающим. Я заметил это по некоторым блоггерам, которые работают как ИП. Стали ИП – начали писать про политику, налоги, социальные вопросы.
Ни в коем случае нельзя уходить от уплаты налогов, да еще с зарубежных переводов. Будет бо-бо, вплоть до тюрьмы.
Еще есть патент. Вы оформляете патент на вид деятельности, и сумма налога становится фиксированной. Эльба умеет работать с патентом. Получать его просто — качаете заявление, заполняете и несете в налоговую по месту прописки.
Стоимость патента зависит от региона, уточняйте на сайте Налоговой службы.
Налоговые органы могут учинить проверку. Информация о том, кого будут проверять, выкладывается на официальных сайтах. Эльба трекает эти сайты и подскажет, если в отношении вас что-то планируют.
Как жить с удаленной работой
Главное в удаленке – не остаться в изоляции. Есть риск деградировать. Нельзя замыкаться, ищите единомышленников.
Хорошей альтернативой может стать коворкинг или другое место, где собираются около-айтишные люди. Там, среди разного сброда, можно найти действительно полезные связи.
Должно быть место, куда вы добираетесь, чтобы работать. Это место проводит грань – вот я на работе, вот я дома. Иначе становится трудно отделить работу от семьи. Ты встал из-за стола, а коллеги все пишут и пишут в скайп. Ты отвечаешь, хотя вроде решил, что на сегодня хватит…
Иногда работа размазывается по всему дню, с этим тоже надо уметь жить. Я работаю до обеда из коворкинга, а после обеда из дома, потому что вожу и забираю из школы сына. Приходится быстро адаптироваться к условиям. Например, пока сын плавает в бассейне, быдлокодю в фойе. Потом идем в парк, сын катается на велике, а я быдлокодю на лавочке. И все это – без раскачки, с полным контекстом в голове.
Первое время после увольнения будет не хватать людей, общения, прежних связей. Выходом стали регулярные встречи на айтишные темы. Мы с коллегами организовали “Глубокий рефакторинг”. Раз в месяц мы рассказываем всякие интересные штуки, шутим, троллим друг друга.
О прокрастинации
В чате были вопросы о том, как бороться с прокрастинацией. Например, как быстро набирать контекст, приступать к делу без раскачки.
Минимальные требования – избавиться от вредных привычек (курение и алкоголь), делать зарядку, ложиться рано, не сидеть допоздна за компом. Ходить пешком и немного заниматься спортом, скажем, хотя бы раз в неделю что-то посещать – зал, бассейн.
Сократить число отвлекающих факторов. Отключить нотификации в месенджерах, всяких скайпах-слаках. Оставить только самые важные, например, от конкретных людей. Чаты со смешными картинками поставить на мьют.
Слушать музыку без слов, а лучше всего шумы природы, океана. Под них очень классно работать.
Не пользоваться соцсетями, читать только RSS в Feedly или другом клиенте. Не шариться просто так по сайтам. Использовать текстовые редакторы вместо ИДЕ, плоский текст вместо бинарных файлов или конфигов. Вместо облачных хранилищ подойдет приватный репозиторий на Гитхабе.
Если прокрастинация имеет место, не надо делать трагедии. Стоит тушить комп и делать паузу минут на 15. Полезно пройтись после работы. Не смотрите телевизор и сериалы, сократите информационный поток, откажитесь от чего-то лишнего.
Other
Among other things, I spoke unflatteringly in the chat about scripting languages, Ruby and Python in particular. I apologize if that offended anyone. There will be a separate post on the subject.
Vague musings on the advantages of Lisp and Go will be there as well.
After my goodbye, other remote workers spoke up in the chat. They are worth reading too.
UPD: link to the chat dump. In a few days I will post a structured write-up in the blog.
Today, Tuesday, at 18:00 Moscow time there will be a Slackinar about remote work. A Slackinar is a text-mode conference held in Slack. We will talk in the #general channel of the Hexlet educational project.
I will cover the finer points of remote work, roughly along this plan:
what remote work is and how I got into it,
how it differs from freelancing,
kinds of remote work, and a comparison of remote work in Russia and in other countries,
remote work for foreign companies,
pros and cons of this kind of work,
how to find a remote job and how to formalize it legally.
The discussion will take roughly 2 hours. Afterwards the dump of the conversation will be posted to the archive wiki, and I will write it up as a separate blog post.
Drop by, it will be interesting.
I came across an interesting article by a psychotherapist. He says sensible things and clears the head nicely. Let me quote the most interesting parts with my comments.
On children:
A child should be grown, not brought up. Upbringing means restriction, and the heaviest restriction is the ban on thinking. Because if a child starts doing something of their own that the teachers don't understand, they will get a failing grade. I helped my younger son solve a math problem. Later my son says: that's wrong, our teacher solved it differently. And next time he will not be solving the problem, he will be guessing what the teacher wants.
I'd sign that in blood. Teachers who reject homework because it was "solved the wrong way" should be strung up on lampposts as a warning to others.
On holidays:
Or when a person's life is joyless, they start celebrating holidays. And that is against nature; by nature every day should be a holiday. Take celebrating New Year. It costs a person at least 100 thousand dollars, only I calculate not from what they earn now but from what they would like to and could earn. A psychotherapist's normal hourly rate, say, is 100 dollars, but you reach it not at 18 but at 25. And if people spend two weeks before New Year on who knows what and two weeks after, they never grow up to their own potential.
There are too many holidays in the Russian calendar. You notice it especially once you start working with Europe or the US. I cannot convey how much the shuffling of workdays around a holiday annoys me: working on Saturday, say, so that Monday is a day off. Obviously nobody is going to work on that Saturday. Biorhythms! And holidays with dates in their names, like March 8, February 23, May 1 and 9, are such Soviet leftovers it is embarrassing. This year my wife said: don't congratulate me on March 8, better pick any other day, because it is so dreary. I had wanted to say the same thing myself. So now we don't celebrate them.
On work and love:
It is elementary: interesting work and love, with love in second place. We expect more from love than it can give. Say I built a house: electricity, plumbing, heating, sewage, you can live in it. But there is no wallpaper, and people think wallpaper is the most important thing. To put it bluntly, there is a lot of rubbish in people's heads, and that rubbish drives their behavior.
Interesting work is what fulfills a person. It is no accident Maslow put self-actualization at the very top. A person consumed by the work they love loves everyone and is loved by everyone, even if by nature he is a jerk, like Steve Jobs.
On personal and career growth:
You need to become a top-class professional. Then you automatically become useful to others; Adam Smith said as much. I write books, I explain how to live better and more comfortably. There are results: many of my students have risen far. It turns out you can succeed even in our conditions. And when I stopped celebrating birthdays and New Year, not a single person refused to associate with me.
Simply put, you should not only accumulate knowledge but also share it.
On money:
When someone comes to me with romantic drama, I ask how much they earn. Learn to earn five thousand dollars, then deal with your family problems.
Also true. We are told money is not the main thing, yet money solves 90% of all problems. Take a typical unhappy family and start digging, and everything comes down to being short of money.
On wealthy suitors:
Idiots. Here is an example. A Rostov oligarch married a hairdresser and took her out of work. Then he was shot. She was 25 with a five-year-old child. I watched her go broke within a few years. Had she been educated, she would have held on to the fortune. But he used to say: why, I'll earn it myself.
I know more than one sad story like that among my childhood acquaintances. They find pot-bellied gangsters and live in luxury. Then the husband is jailed, killed, or the business is simply taken away. The wife is left either to scrape by or to be kept by another businessman.
On prostitution and morality:
Man and woman matter only when we talk about sex and having children. When it comes to work, it is about who does it better. But our women are raised in a style of latent prostitution: she looks for a man who will protect her, feed her, live for her, while she herself can be nobody. What do you call that? Ordinary prostitution seems more honest to me: a prostitute masters the technique of sex, gets paid, and does not impose herself on the man as a lifelong companion.
On madness:
Nietzsche already said it: madness in individuals is the exception, madness in crowds is the rule. There are psychotic disorders: delusions, hallucinations, absurd behavior. There are not many people like that, about six per thousand.
No comment; just watch a news broadcast on ORT.
On those who blame everyone else:
That I don't know, I am not the president. But for a young, healthy person who cannot get things going I have no compassion. Go to the wholesale market, buy something and resell it. Or drive a taxi.
Amusingly, this interesting material was published on a women's site otherwise filled with makeup and feng shui. Go figure.
I liked the structure: roughly thirty chapters, "atoms", each on a different aspect of the language. The author is a seasoned C++ developer; along the way he gives useful advice, explains OOP patterns, and touches on FP.
Toward the end there is a series of chapters on monads. Either, Maybe (Option) and Try are covered. I liked that the author never once uses the word "monad". That is a jab at Haskell, where monads start with applicative functors.
The book ships with a set of examples, complete with tests and docs. Everything is solid and well made, and it lives on GitHub. The author took the job very seriously; in effect it is not just a book but a community around it, and buying the book is a kind of membership card.
I tinkered with the language a little and can say the following. Forgive the dilettante's view: every point below may turn out to be lies and provocation.
At first the language seems very simple, like Go or Python but with types. Around the middle of the book you realize it is not that simple.
Some design decisions are very strange. Take companion objects, where a class and an object are declared with the same name. I puzzled over it for a long time before realizing that this is how a metaclass is formed. It could have been made more convenient.
Redundant modifiers like sealed and implicit. It feels like they were bolted on after the fact, and it did not come out well. Scala supposedly strives for minimalism, yet it ends up being Java again.
The worst part is parameterized traits. It resembles Haskell, but very clumsily. The book has an example of modeling an ice cream recipe: a hundred lines to declare the traits, then a wild mix of them. At the end the author writes how nicely it all turned out. I would hate to be the one maintaining that ice cream.
Scala is still an OOP language, even if it leans functional. As far as I can tell, procedural-modular programming is not encouraged in the Scala ecosystem. What you get is yet another OOP language.
Hieronymus Bosch, "A visual guide to the Scala language", oil on oak panels, 1490-1510.
The left panel shows the functional features, the main one describes the type system, and the right the object oriented parts.
Still, the language did spark my interest, and given the chance I would gladly join a project written in it.
Selected quotes from the book:
On manual iteration:
Notice that map and reduce take care of the iteration code that you normally write by hand. Although managing the iteration yourself might not seem like much effort, it's one more error-prone detail, one more place to make a mistake (and since they're so "obvious," such mistakes are particularly hard to find).
On comments:
Comments should add new information that isn't obvious from reading the code. If the comments just repeat what the code says, it becomes annoying (and people start ignoring your comments). When the code changes, programmers often forget to update comments, so it's a good practice to use comments judiciously, mainly for highlighting tricky aspects of your code.
On pattern matching:
Notice that pattern matching can overlap with the functionality of if statements. Because pattern matching is more flexible and powerful, we prefer it over if statements when there's a choice.
Overall the book gets a pass from me; I don't regret the time spent on it.
|
A Python Tutorial, the Basics
A very easy Python tutorial!
#Tutorial Jam
Here is a basic tutorial for Python, for beginners!
Table of Contents:
1. The developer of python
2. Comments/Hashtags
3. Print and input statements
f' strings
4. If, Elif, Else statements
5. Common Modules
1. Developer of Python
It was created in the late 1980s by Guido van Rossum in the Netherlands. It was made as a successor to the ABC language, capable of interfacing with the Amoeba operating system. Its name is Python because, while he was thinking about the language, he was also reading 'Monty Python's Flying Circus'. Guido van Rossum thought the language needed a short, unique name, so he chose Python.
For more about Guido van Rossum, click here
2. Comments/Hashtags
Comments are side notes you can write in Python. They can be used for:
sidenotes
instructions or steps
etc.
How to write comments:
#This is a comment
The output is nothing because:
It is a comment and comments are invisible to the computer
Comments are not printed in Python
So just to make sure: hashtags are used to make comments. And remember, comments are ignored by the computer.
3. Print and Input statements
1. Print Statements
Print statements, printed as print, are statements used to print sentences or words. So for example:
print("Hello World!")
The output would be:
Hello World!
So you can see that the print statement is used to print words or sentences.
2. Input Statements
Input statements, printed as input, are statements used to 'ask'. For example:
input("What is your name?")
The output would be:
What is your name?
However, with inputs, you can write in them. You can also 'name' the input. Like this:
name = input("What is your name?")
You could respond by doing this:
What is your name? JBYT27
So pretty much, inputs are used to take a value that you can use later.
Then you could add an if statement, but let's discuss that later.
3. f strings
f strings, written as f (before a quotation mark), are used to put a value you already have into a print or input. What I mean is, say I put an f string in a print statement, like this:
print(f"")
The output right now, is nothing. You didn't print anything. But say you add this:
print(f"Hello {name}!")
It would work, but only if name was defined. In other words, say you had an input before and you did this with it:
name = input()
Then the f string would work. Say for the input, you put in your name. Then when the print statement would print:
Hello (whatever your name was)!
Another way you could do this is with commas. This doesn't use an f string, but it's similar. You would print it like this:
name = input()
...
print("Hello ", name, "!")
The output would be the same as well! The commas separate the two strings and put the name in between. But JBYT27, why not a plus sign? You could use + too, but then everything you join has to be a string, otherwise Python gives you an error; commas also work with numbers and other values and add spaces for you, so they are the easier choice here.
Really, the only time you would use this is to give back your name, or to check whether one value is equal to another, which we'll learn in a sec.
4. If, Elif, Else Statements
1. If Statements
If statements, written as if, are literally what they're called: "if" sentences. They check whether something equals (or otherwise compares to) something else, and if so, an effect happens. You can think of an if statement as cause and effect. An example of an if statement is:
name = input("What is your name?")
#asking for name
if name == "JBYT27":
print("Hello Administrator!")
The output could be:
What is your name? JBYT27
Hello Administrator!
However, say it isn't JBYT27. This is where the else, elif, try, and except statements comes in!
2. Elif Statements
Elif statements, written as elif, are pretty much if statements; the words else and if are just combined. So say you wanted to add more conditions. Then you would do this:
if name == "JBYT27":
print("Hello Administrator!")
elif name == "Code":
print("Hello Code!")
It's just adding more if statements, with an else attached!
3. Else Statements
Else statements, written as else, go with if and elif statements. They tell the computer: if it's not this and it's not that, fall back to this other result. You can use it like this (following on from the code above):
if name == "JBYT27":
print("Hello admin!")
elif name == "Squid":
print("Hello Lord Squod!")
else:
print(f"Hello {name}!")
5. Common Modules
Common modules include:
os
time
math
sys
replit
turtle
tkinter
random
etc.
So for all these modules that I listed, I'll tell you how to use them, step by step! ;) But wait, what are modules?
Modules are like packages of ready-made code that come with Python (or can be installed). You just have to import the one you want (please correct me if I'm wrong). So take this code:
import os
...
When you do this, you successfully import the os module! But wait, what can you do with it? The most common way people use the os module is to clear the screen, for example with os.system("clear"): it clears the console (the black part) so your screen looks clean again. But, since there are many, many, many modules, you can also clear the screen using the replit module. The code is like this:
import replit
...
replit.clear()
But one amazing thing about this importing is you can make things specific. Like say you only want to import pi and sqrt from the math package. This is the code:
from math import pi, sqrt
Let me mention that when you do this, never, ever add an and, as in from ... import ... and .... That is a syntax error and it's just horrible... Don't do it :)
Next is the time module
You can use the time module for:
time delay
scroll text
And yeah, that's pretty much it (i think)
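For instance, here is a tiny sketch of those two uses (the delay lengths and the message are just made-up example values):

import time

# Time delay: pause the program for two seconds.
print("loading...")
time.sleep(2)

# "Scroll" text: print a message one character at a time with a small delay.
for ch in "Hello World!":
    print(ch, end="", flush=True)
    time.sleep(0.1)
print()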
Note:
All of the import syntax is the same except for the names
Next is tkinter, turtle
You can use the tkinter module for GUIs (graphical user interfaces); you can import it in a normal Python program, or try it out in a new repl.
You can use the turtle for drawing, it isn't used much for web developing though.
The math and sys
The math module is used for mathematical calculations. The sys module gives access to variables and functions used by the interpreter itself (such as command-line arguments). I don't really know how else to explain it to you, but for more, click here
Random
The random module is used for making random choices and random numbers. Say you wanted to pick a random item from a list. Here would be the code:
import random
...
a_list = ["JBYT27","pie","cat","dog"]
...
print(random.choice(a_list))
The output would be a random choice from the variable/list. So it could be pie, JBYT27, cat, or dog. From the random module, there are many things you can import, but the most common are:
choice
randrange
etc.
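And a quick sketch of randrange, with example bounds I picked just for illustration:

import random

# randrange(start, stop) picks a random integer from start up to (but not including) stop.
print(random.randrange(1, 7))   # like rolling a six-sided die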
And that's all for modules. If you want links, click below.
Links for modules:
And that's it!
Hooray! We made it through without sleeping!
Credits to:
Many coders for tutorials
Books and websites
replit
etc.
Links:
Web links:
ranging from a few days or hours, if you like reading
Video links:
ranging from 1-12 hours, if you don't like reading
Otherwise:
ranging from 5 hours to a few days, replit tutorial links
I hope you enjoyed this tutorial! I'll cya on the next post!
stay safe!
|
Problem description
A JadenCase string is one where the first letter of every word is uppercase and all remaining letters are lowercase. Given a string s, return s converted to JadenCase.
Constraints
s is a string of length 1 or more.
s consists of alphabet characters and spaces (" ").
When the first character of a word is not a letter, the letters that follow it are still written in lowercase (see the first example below).
Examples
When "3people unFollowed me" is the input -> "3people Unfollowed Me" is returned
When "for the last week" is the input -> "For The Last Week" is returned
Solution
Approach
--
First, any uppercase letters need to become lowercase, so I lowercase the whole string up front, split it into words on spaces, and then capitalize the first letter of each word.
def solution(s):
    s = s.lower()              # lowercase the whole string first
    L = s.split(" ")           # split into words on single spaces
    answer = ""
    for i in L:
        i = i.capitalize()     # uppercase only the first character of each word
        answer += i + " "
    return answer[:-1]         # drop the trailing space added after the last word
Make the received s entirely lowercase.
Using the split function, cut it on spaces and put the words into the list L.
In a loop, apply capitalize to each word so that only its first letter becomes uppercase.
Join the capitalized words back together with spaces.
If you returned this as is, there would be an extra space at the very end, so slice it off and return everything up to just before that last space.
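As a quick check (my own addition), calling the function above on the two sample inputs:

print(solution("3people unFollowed me"))   # 3people Unfollowed Me
print(solution("for the last week"))       # For The Last Week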
Grading result
|
Tinkering notes: building Godot games for the GameShell
2020/12/1
It all started one late night when I was idly browsing Taobao.
I stumbled upon "retro game" consoles and open-source handhelds, and the urge to hack flared up.
But many so-called open-source handhelds are really just for playing emulator games.
And their looks are pretty tacky 😜
GameShell
Maybe Taobao's recommendation algorithm has improved, because by chance I found a product called GameShell.
The looks, the design, and the Raspberry-Pi-like architecture (it feels like a modded Raspberry Pi) all appealed to me.
For someone who has done software development for years, even adding a battery to a Raspberry Pi is beyond me 😭
A product like this has a lower barrier than a dev board: you can take it out with you and charge it like a normal phone.
It is basically a portable Raspberry Pi.
Thinking about it, making a game on this thing would definitely look fun at a Game Jam.
On making games on the GS
That was my first question after getting the GS. I tried PyGame (an SDL wrapper), LÖVE...
I have to admit that years of Unity development shaped my habits, and these two tools didn't fit them.
So I reached a short-lived conclusion: the GS is a nice device, but making games on it isn't easy.
The GS then started gathering dust for several months, and I couldn't even sell it on Xianyu...
Out of desperation I began rethinking what it could do 😭
About Godot
By chance I learned that Godot had moved to 3.x; mostly I wanted to look at other engines after so long with Unity.
But in the end I found that this engine seems even more open than Unity and can be ported to the Raspberry Pi.
Going further, it turned out you can make games on the GS with it; see the discussion thread in the official GS community.
After some poking around I got results.
Step 1: build a Custom Template
This part is based on godot_3.1_cpi.
First, Godot's Export is the equivalent of Unity's Build: it produces the playable game.
A Custom Template is a "dependency build template" generated for a specific architecture and OS (that is my guess, anyway).
With a template you can build a Windows game on macOS, or a Linux game for an ARM CPU.
The concrete steps are roughly:
git clone https://github.com/godotengine/godot.git
cd godot
# This step builds the Custom Template; the output ends up in the godot/bin directory
# -j12 means 12 jobs; use_llvm is required; platform, target and arch are self-explanatory
scons platform=x11 target=release arch=armhf tools=no use_llvm=yes -j12
The resulting file will be in the godot/bin folder, with a name like: godot.x11.opt.armhf.llvm
Step 2: export the game from Godot
You can grab any Godot demo project for this; I recommend the 2D Platformer.
Next, add a quit-game feature to the Godot game (optional; GS programs normally return to the main menu via the MENU key).
The MENU key maps to the keyboard Escape key.
In the code you can add:
func _process(delta):
    # "ui_escape" is a custom action: define it in Project Settings -> Input Map and bind it to the Escape key
    if Input.is_action_just_pressed("ui_escape"):
        get_tree().quit()
Copy godot.x11.opt.armhf.llvm out and point the export dialog's Custom Template at it.
Step 3: test flight
Copy the generated .pck and .x86 files to the GameShell.
If the transfer is slow, consider USB Ethernet.
Next, extend the GS main menu under the /home/cpi/launcher/Menu/GameShell directory.
In that directory create a shell file named something like: 10_MyGame.sh
PS: my guess is that the leading number is used for sorting and the text after it for the displayed title.
#!/bin/bash
exec /path/to/your/game/YourGame.x86
Reload the GS home screen, tap your game's icon, and you're done.
If Godot does not support the graphics driver
If Godot complains that the GPU is unsupported, try the "FBTURBO driver".
It is still unclear to me why the LIMA driver is not recognized by Godot (the GS GPU is a Mali).
Software rendering is, however, quite painful 😭 For a start, skyboxes don't work.
My personal advice is that Godot on the GS is currently better suited to 2D games.
Otherwise the load times alone for a 3D game will drive you crazy.
By 鱆
|
In UI programming we often need to respond to mouse clicks. First of all, how do we detect a mouse click?
Buffered I/O, also called standard I/O, is the default I/O mode of most file systems. In Linux's buffered I/O mechanism, the OS caches I/O data in the file system's page cache; that is, data is first copied into the kernel's buffer and only then copied from the kernel buffer into the application's address space. User space cannot access kernel space directly, so a kernel-to-user copy is always needed.
Question to ponder: why must the data go through the kernel first? Wouldn't copying straight into user memory be more direct?
Drawback of buffered I/O:
Data has to be copied several times between the application address space and the kernel during transfer, and the CPU and memory overhead of these copies is substantial.
What exactly are synchronous vs asynchronous I/O, and blocking vs non-blocking I/O, and what is the difference? Different people give different answers; Wikipedia, for example, treats asynchronous I/O and non-blocking I/O as the same thing. That is because people have different backgrounds and discuss the question in different contexts. So, to answer it properly, let me first fix the context of this article.
The context here is network I/O in a Linux environment.
Stevens compares five I/O models in his book:
Since signal-driven I/O is rarely used in practice, I will only cover the remaining four models.
Let me also describe the objects and steps involved when I/O happens.
For a network I/O (take read as the example), two system objects are involved: the process (or thread) that issues the I/O, and the kernel. When a read happens it goes through two phases:
1. Waiting for the data to be ready
2. Copying the data from the kernel to the process
Keeping these two phases in mind matters, because the differences between the I/O models come down to how each model behaves in these two phases.
(1) For every request received, create a new process to handle it;
(2) For every request received, create a new thread to handle it;
(3) For every request received, put it into an event list and let the main process handle requests with non-blocking I/O.
To control process execution, the kernel must be able to suspend the process currently running on the CPU and resume a previously suspended one. This is called a process switch, and it is done by the operating system. It follows that every process runs with the support of the kernel and is tightly coupled to it.
Switching from running one process to running another involves the following steps:
Save the CPU context, including the program counter and other registers.
Update the PCB information.
Move the process's PCB into the appropriate queue, such as ready or blocked-on-some-event.
Select another process to run and update its PCB.
Update the memory-management data structures.
Restore the CPU context.
Note: in short, it is very resource-intensive.
Under Linux, a socket can be made non-blocking. When you perform a read on a non-blocking socket, the flow is as follows:
When the user process issues a read and the data in the kernel is not ready yet, the call does not block the process; it immediately returns an error. From the process's point of view, it issues a read and gets a result right away. When the result is an error it knows the data is not ready, so it can issue the read again. Once the data in the kernel is ready and the process makes the system call again, the kernel copies the data into user memory and returns.
So the defining property of non-blocking IO is that the user process has to keep actively asking the kernel whether the data is ready.
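A self-contained sketch of that polling loop (my own illustration, using a local socket pair so it runs anywhere; the buffer size and message are invented for the example):

import socket

a, b = socket.socketpair()
a.setblocking(False)           # reads on a now return immediately instead of blocking

tries = 0
while True:
    try:
        data = a.recv(1024)    # ask the kernel: is the data ready yet?
        break                  # ready: the kernel copied it into user memory here
    except BlockingIOError:
        tries += 1             # not ready; a real program would do other work and ask again
        if tries == 3:
            b.send(b"hello")   # simulate the peer finally sending data
print(data, "after", tries, "polls")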
Usually, when writing server request-handling programs, there are the following models:
Most UI programming today is event-driven; many UI platforms provide an onClick() event, which represents the mouse-press event. The rough idea of the event-driven model is as follows:
Back to the original question: how do we know the IO operation has finished so we can switch back? Through a callback function.
1. To understand event-driven programs, compare them with non-event-driven ones. Most modern programs are in fact event-driven; a multithreaded program certainly is. Early on there were many non-event-driven programs: when such a program had to wait for some condition, it checked the condition over and over until it held, which wastes CPU time. An event-driven program, by contrast, has the chance to release the CPU and go to sleep (only the chance; it may also decide not to release it), and is woken by the OS when the event fires, which uses the CPU far more effectively.
2. So what is an event-driven program? A typical one is an endless loop running as a thread. The loop has two parts: the first receives and selects, according to some condition, an event to handle; the second handles it. Execution is just selecting events and handling events, and when no event is pending, the query of the event queue fails and the program goes to sleep, releasing the CPU.
3. An event-driven program always owns, directly or indirectly, an event queue for storing events that could not be handled right away.
4. Its behavior is entirely controlled by external input events, so event-driven systems contain a great many such programs and use events as the main means of communication.
5. Another big advantage is that events in the queue can be processed in a fixed order, the order in which they were triggered; this property is often used to make certain procedures atomic.
6. Windows, Linux, Nucleus and VxWorks are all event-driven today; only some microcontrollers may not be.
Note that the listening for events in an event-driven system is done by the CPU under the control of the operating system.
Calling a blocking IO blocks the corresponding process until the operation completes, whereas a non-blocking IO returns immediately while the kernel is still preparing the data.
2. The difference between synchronous IO and asynchronous IO
The difference is that synchronous IO blocks the process while performing the "IO operation". By this definition, the blocking IO, non-blocking IO and IO multiplexing described earlier are all synchronous IO.
Some will say that non-blocking IO is not blocked. Here is the "sneaky" part: the "IO operation" in the definition means the real IO operation, i.e. the recvfrom system call in the example. With non-blocking IO, if the kernel's data is not ready, recvfrom does not block the process. But once the kernel's data is ready, recvfrom copies it from the kernel into user memory, and during that copy the process is blocked.
Asynchronous IO is different: once the process initiates the IO operation it returns at once and pays no further attention, until the kernel sends a signal telling it the IO is done. Throughout the whole procedure the process is never blocked.
A comparison of the IO models is shown in the figure:
The question from the previous section:
Coroutines: switch away whenever an IO operation is hit.
But when do we switch back? How do we know the IO operation has finished?
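A minimal sketch of that idea, using a plain Python generator as the coroutine and the caller as a toy scheduler (entirely my own illustration, not code from the original text):

def worker():
    print("start work")
    data = yield "need io"      # hit an IO point: hand control back to the scheduler
    print("got", data)          # resumed once the scheduler says the IO finished

task = worker()
request = next(task)            # run until the first yield (the "switch away")
print("scheduler sees:", request)
try:
    task.send("fake result")    # switch back, passing in the IO result
except StopIteration:
    pass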
Many programmers will think of using a "thread pool" or "connection pool". A thread pool cuts the cost of creating and destroying threads: it keeps a reasonable number of threads around and lets idle threads take on new work. A connection pool keeps a cache of connections and tries to reuse existing ones, reducing how often connections are opened and closed. Both techniques lower system overhead nicely and are widely used in large systems such as WebSphere, Tomcat and various databases. However, they only ease, to a degree, the resource usage caused by frequent IO calls. Moreover, a "pool" always has an upper bound; once requests far exceed it, a pooled system does not respond much better than one without a pool. So when using pools you must consider the scale of load you face and size the pool accordingly. For the thousands or tens of thousands of simultaneous client requests in the example above, a thread pool or connection pool may relieve some of the pressure, but it cannot solve everything. In short, the multithreading model handles small-scale request loads conveniently and efficiently, but it hits a bottleneck under large-scale loads; non-blocking interfaces can be tried to get past that.
Traditional programming follows this linear pattern:
start ---> block A ---> block B ---> block C ---> block D ---> ... ---> end
Each block contains code that accomplishes various things, but the programmer knows the execution order of blocks A, B, C, D...; the only thing that can change the flow is the data. With different input, the conditionals may route the flow as A ---> C ---> E ... ---> end. Each run may differ, but the control flow is determined by the input and by the program you wrote. If you know the program's current state (input data plus the program itself), you know its flow from here all the way to the end.
For an event-driven program, the flow is roughly:
start ---> initialize ---> wait
Unlike the traditional model, after start-up an event-driven program just sits there waiting. Waiting for what? To be triggered by an event. Traditional programs also "wait" sometimes: in block D you might call input() and wait for the user to type something. But that differs from the waiting here; in the traditional case you, the author, know or force the user to enter something specific, a number or a filename, and if they get it wrong you prompt them to retry. An event-driven program's wait is completely open-ended: it does not force the user to enter anything or do anything. Whenever some event happens, the program reacts to it. Such events include input, the mouse, a key press, or an internal timer firing.
In UI programming we often need to respond to mouse clicks. First, how do we detect a click?
Approach 1: create a thread that loops forever checking whether a click happened. This has several drawbacks:
1. Wasted CPU: clicks may be rare, yet the scanning thread keeps looping, burning a lot of CPU for nothing. And what if the click-scanning call is blocking?
2. If it is blocking, another problem appears: if we also need to scan the keyboard, we may never get to it, because we are stuck blocked on the mouse scan;
Approach 2: the event-driven model
Most UI programming today is event-driven; many UI platforms provide an onClick() event representing the mouse-press. The rough idea of the event-driven model is:
Event-driven programming is a paradigm in which the flow of the program is determined by external events. Its hallmark is an event loop that uses callbacks to trigger the appropriate handling when an external event occurs. The other two common paradigms are (single-threaded) synchronous programming and multithreaded programming.
Let's compare single-threaded, multithreaded and event-driven programming with an example. The figure below shows, over time, the work done by a program under each model. The program has three tasks to complete, and each task blocks while waiting on I/O; time blocked on I/O is shown in gray.
In the single-threaded synchronous model, tasks run one after another. If a task blocks on I/O, all the others must wait until it finishes before they can run in turn. This strict ordering and serial processing is easy to reason about, but if the tasks do not depend on each other and still have to wait for one another, the program is slowed down unnecessarily.
In the multithreaded version, the three tasks run in separate threads. The threads are managed by the OS and may run in parallel on a multiprocessor or interleaved on a single processor. This lets other threads continue while one is blocked on a resource, and it is more efficient than the equivalent synchronous program, but the programmer must write code to protect shared resources from concurrent access. Multithreaded programs are harder to reason about: thread safety has to be handled with locks, reentrant functions, thread-local storage or other mechanisms, and getting it wrong leads to subtle, agonizing bugs.
In the event-driven version, the three tasks are interleaved but still within a single thread of control. When doing I/O or another expensive operation, a callback is registered with the event loop, and execution resumes when the I/O completes. The callback describes how the event should be handled. The event loop polls for events and dispatches them to the waiting callbacks as they arrive. This lets the program make as much progress as possible without extra threads. Event-driven programs are easier to reason about than multithreaded ones, because the programmer need not worry about thread safety.
The event-driven model is usually a good choice when the program faces the following kinds of environment:
It is also a good choice when the application needs to share mutable data between tasks, since no synchronization is required.
Network applications typically have exactly these characteristics, which makes them a very good fit for the event-driven programming model.
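A minimal sketch of the event-loop-plus-callbacks idea described above (a toy illustration of my own; the queue, event names and handlers are all invented for the example):

import collections

handlers = {}                       # event name -> callback
events = collections.deque()        # pending events (the event queue described above)

def on(name, callback):
    handlers[name] = callback

def fire(name, payload=None):
    events.append((name, payload))

def run():
    while events:                   # a real loop would sleep when the queue is empty
        name, payload = events.popleft()
        handlers[name](payload)     # the callback decides how to handle the event

on("click", lambda pos: print("clicked at", pos))
on("key", lambda ch: print("key pressed:", ch))
fire("click", (10, 20))
fire("key", "q")
run()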
In the event-driven model above, whenever IO is encountered an event is registered and the main program goes off to do other things; only when the IO finishes does the interrupted task resume. How is this actually implemented underneath?
Logic diagram:
Having studied coroutines, I learned that they too are at their best when handling IO, which raises two points:
Under Linux, all sockets are blocking by default. A typical read flow looks like this:
When the user process calls the recvfrom system call, the kernel starts the first phase of the IO: preparing the data. For network IO the data often has not arrived yet (for example, a complete UDP packet has not been received), so the kernel waits for enough data to come in, and during this time the whole user process is blocked. Once the kernel has the data ready it copies it from kernel space into user memory and returns the result; only then does the user process leave the blocked state and run again.
So the defining property of blocking IO is that the process is blocked during both phases of the IO.
1.1 User space and kernel space
Modern operating systems use virtual memory, so a 32-bit OS has an addressable (virtual) space of 4 GB (2^32). The core of the OS is the kernel, which is separate from ordinary applications; it can access the protected memory space and has full access to the underlying hardware. To keep user processes from manipulating the kernel directly and to keep the kernel safe, the OS divides the virtual space into kernel space and user space. On Linux, the highest 1 GB (virtual addresses 0xC0000000 to 0xFFFFFFFF) is reserved for the kernel and is called kernel space, while the lower 3 GB (0x00000000 to 0xBFFFFFFF) is used by the processes and is called user space.
1.2 Process switching
To control process execution, the kernel must be able to suspend the process running on the CPU and resume a previously suspended one. This is called a process switch. Every process therefore runs with the support of the kernel and is tightly coupled to it.
Switching from running one process to running another involves these changes:
Update the PCB information.
Move the process's PCB into the appropriate queue, such as ready or blocked-on-some-event.
In short, it is very resource-intensive; for details see the article on process switching.
Note: the Process Control Block (PCB) is a data structure in the OS kernel that represents the state of a process. Its purpose is to turn a program (with its data) that cannot run on its own in a multiprogramming environment into a basic unit that can run independently or concurrently with other processes; in other words, the OS controls and manages concurrently executing processes through their PCBs. The PCB is usually a contiguous region of system memory that stores all the information the OS needs to describe the process and control its execution.
1.3 Process blocking
When a running process is waiting for something that has not happened yet, such as a failed request for a system resource, waiting for an operation to finish, data that has not arrived, or no new work to do, the system automatically executes the blocking primitive (block), changing the process from running to blocked. Blocking is thus an active act of the process itself, and only a running process (one holding the CPU) can become blocked. A blocked process consumes no CPU.
1.4 File descriptors (fd)
A file descriptor is a computer-science term for an abstraction used to refer to a file.
Formally, a file descriptor is a non-negative integer. In practice it is an index into the table of open files that the kernel maintains for each process. When a program opens an existing file or creates a new one, the kernel returns a file descriptor to the process. Low-level programming often revolves around file descriptors. The concept applies mainly to UNIX-like operating systems such as Linux.
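A tiny sketch showing that a descriptor really is just a small integer handed back by the kernel (my own illustration; the path is an example and assumes a Unix-like system):

import os

fd = os.open("/tmp/fd_demo.txt", os.O_CREAT | os.O_WRONLY)
print("file descriptor:", fd)   # a small non-negative integer (0/1/2 are stdin/stdout/stderr)
os.write(fd, b"hello\n")        # the low-level write goes through the descriptor
os.close(fd)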
1.5 Buffered I/O
Buffered I/O, also called standard I/O, is the default I/O mode of most file systems. In Linux's buffered I/O mechanism, the OS caches I/O data in the file system's page cache; that is, data is first copied into the kernel's buffer and only then copied from the kernel buffer into the application's address space.
Drawback of buffered I/O:
Data is copied several times between the application address space and the kernel during transfer, and the CPU and memory overhead of these copies is substantial.
Example 4:
#_*_coding:utf-8_*_
__author__ = 'Alex Li'
import select
import socket
import sys
import queue
# Create a TCP/IP socket
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setblocking(False)
# Bind the socket to the port
server_address = ('localhost', 10000)
print(sys.stderr, 'starting up on %s port %s' % server_address)
server.bind(server_address)
# Listen for incoming connections
server.listen(5)
# Sockets from which we expect to read
inputs = [ server ]
# Sockets to which we expect to write
outputs = [ ]
message_queues = {}
while inputs:
# Wait for at least one of the sockets to be ready for processing
print('\nwaiting for the next event')
readable, writable, exceptional = select.select(inputs, outputs, inputs)
# Handle inputs
for s in readable:
if s is server:
# A "readable" server socket is ready to accept a connection
connection, client_address = s.accept()
print('new connection from', client_address)
connection.setblocking(False)
inputs.append(connection)
# Give the connection a queue for data we want to send
message_queues[connection] = queue.Queue()
else:
data = s.recv(1024)
if data:
# A readable client socket has data
print(sys.stderr, 'received "%s" from %s' % (data, s.getpeername()) )
message_queues[s].put(data)
# Add output channel for response
if s not in outputs:
outputs.append(s)
else:
# Interpret empty result as closed connection
print('closing', client_address, 'after reading no data')
# Stop listening for input on the connection
if s in outputs:
outputs.remove(s) # the client has disconnected, so we no longer need to reply; if its connection object is still in outputs, drop it
inputs.remove(s) # remove it from inputs as well
s.close() # close the connection
# Remove message queue
del message_queues[s]
# Handle outputs
for s in writable:
try:
next_msg = message_queues[s].get_nowait()
except queue.Empty:
# No messages waiting so stop checking for writability.
print('output queue for', s.getpeername(), 'is empty')
outputs.remove(s)
else:
print( 'sending "%s" to %s' % (next_msg, s.getpeername()))
s.send(next_msg)
# Handle "exceptional conditions"
for s in exceptional:
print('handling exceptional condition for', s.getpeername() )
# Stop listening for input on the connection
inputs.remove(s)
if s in outputs:
outputs.remove(s)
s.close()
# Remove message queue
del message_queues[s]
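The listing above only shows the server side; a minimal client to exercise it might look like this (my own addition; the host and port mirror the server_address used above):

import socket

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(('localhost', 10000))
client.sendall(b'hello select server')
print(client.recv(1024))    # the server echoes back whatever it queued for us
client.close()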
Example 5:
# select-based mock socket server; note that the sockets must be non-blocking for IO multiplexing to work.
# The example below shows how select lets a single process handle several non-blocking socket connections at once.
# server side
import select
import socket
import queue
server = socket.socket()
server.bind(('localhost',9000))
server.listen(1000)
server.setblocking(False) # non-blocking mode: accept and recv no longer block
# Calling server.accept() directly would raise an error when there is no connection, so only call it when there is data.
# BlockingIOError: [WinError 10035] a non-blocking socket operation could not be completed immediately.
msg_dic = {}
inputs = [server,] # the list handed to the kernel / select to watch.
# select must be given at least one entry, otherwise it raises "invalid parameter".
# Before any other connection exists, the server socket itself is the thing being watched; activity on it means a new connection.
outputs = [] # whatever you put in here comes back out on the next iteration
while True:
readable, writeable, exceptional = select.select(inputs, outputs, inputs) # what to watch
# new connections      watch list      exceptions (disconnects)
# the exception argument is also inputs: watch those same connections for errors
print(readable,writeable,exceptional)
for r in readable:
if r is server: # activity here means a new connection has arrived
conn, addr = server.accept()
print("来了个新连接",addr)
inputs.append(conn) # 把连接加到检测列表里,如果这个连接活动了,就说明数据来了
# inputs = [server.conn] # 【conn】只返回活动的连接,但怎么确定是谁活动了
# 如果server活动,则来了新连接,conn活动则来数据
msg_dic[conn] = queue.Queue() # 初始化一个队列,后面存要返回给这个客户端的数据
else:
try :
data = r.recv(1024) # note this is r, not conn: there may be several connections
print("received data", data)
# r.send(data) # don't send directly: if the client is not receiving, the data would be lost
msg_dic[r].put(data) # queue the data instead
outputs.append(r) # and put the connection into the reply list
except ConnectionResetError as e:
print("客户端断开了",r)
if r in outputs:
outputs.remove(r) # clean up the broken connection
inputs.remove(r) # clean up the broken connection
del msg_dic[r] # clean up the broken connection
for w in writeable: # the connections we owe a reply to
data_to_client = msg_dic[w].get() # fetch the data from the dictionary
w.send(data_to_client) # send it back to the client
outputs.remove(w) # remove it so the next iteration does not return this already-handled connection
for e in exceptional: # if a connection broke, delete everything related to it
if e in outputs:
outputs.remove(e)
inputs.remove(e)
del msg_dic[e]
#*************************client
import socket
client = socket.socket()
client.connect(('localhost', 9000))
while True:
cmd = input('>>> ').strip()
if len(cmd) == 0 : continue
client.send(cmd.encode('utf-8'))
data = client.recv(1024)
print(data.decode())
client.close()
Example 6:
import selectors
import socket
sel = selectors.DefaultSelector()
def accept(sock, mask):
conn, addr = sock.accept() # Should be ready
print('accepted', conn, 'from', addr)
conn.setblocking(False)
sel.register(conn, selectors.EVENT_READ, read)
def read(conn, mask):
data = conn.recv(1000) # Should be ready
if data:
print('echoing', repr(data), 'to', conn)
conn.send(data) # Hope it won't block
else:
print('closing', conn)
sel.unregister(conn)
conn.close()
sock = socket.socket()
sock.bind(('localhost', 1234))
sock.listen(100)
sock.setblocking(False)
sel.register(sock, selectors.EVENT_READ, accept)
while True:
events = sel.select()
for key, mask in events:
callback = key.data
callback(key.fileobj, mask)
Note: the most important reference for this article is Richard Stevens, "UNIX® Network Programming, Volume 1, Third Edition: The Sockets Networking API".
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Title</title>
</head>
<body>
<p onclick="fun()">点我呀</p>
<script type="text/javascript">
function fun() {
alert('Wanna hang out?')
}
</script>
</body>
</html>
In UI programming we often need to respond to mouse clicks. First, how do we detect a click? Two approaches:
For one IO access (take read as an example), the data is first copied into the OS kernel's buffer and only then copied from the kernel buffer into the application's address space. So when a read happens, it goes through two phases:
1. Waiting for the data to be ready
2. Copying the data from the kernel to the process
It is precisely because of these two phases that Linux offers the following five network IO models.
Note: since signal-driven IO is rarely used in practice, only the remaining four IO models are covered.
Coroutines: switch away whenever an IO operation is hit.
But when do we switch back? How do we know the IO operation has finished?
The term "IO multiplexing" may sound unfamiliar, but if I say select or epoll you probably get it. Some places call this IO style event-driven IO. The benefit of select/epoll is that a single process can handle the IO of many network connections at once. The basic principle is that the select/epoll function keeps polling all the sockets it is responsible for, and when some socket has data it notifies the user process. The flow is shown in the figure:
When the user process calls select, the whole process blocks; meanwhile the kernel "watches" all the sockets select is responsible for, and as soon as data in any of them is ready, select returns. The user process then calls read to copy the data from the kernel into the user process.
This picture is not very different from the blocking IO one; in fact it is a bit worse, because two system calls (select and recvfrom) are needed where blocking IO needs only one (recvfrom). The advantage of select is that it can handle many connections at once. (One more remark: so if the number of connections is not very high, a web server using select/epoll is not necessarily faster than one using multi-threaded blocking IO, and the latency may even be higher. The strength of select/epoll is not handling a single connection faster but being able to handle more connections.)
In the IO multiplexing model, each socket is in practice usually set non-blocking; but, as the figure shows, the whole user process is in fact blocked the entire time. It is just blocked by the select function rather than by socket IO.
Note 1: if select reports that some file is readable, the process can then call accept() or recv() to have the kernel copy the data prepared in kernel space to user space.
Note 2: select's advantage is handling many connections; it is not suited to a single connection.
Asynchronous IO is actually rarely used on Linux. Its flow looks like this:
After the user process initiates the read it can immediately go do other things. From the kernel's side, when it receives an asynchronous read it returns immediately, so the user process is never blocked. The kernel then waits for the data to be ready, copies it into user memory, and when all of that is done it sends the process a signal telling it the read has completed.
All four IO models have now been introduced. Back to the original questions: where is the difference between blocking and non-blocking, and between synchronous IO and asynchronous IO?
The easy one first: blocking vs non-blocking. The discussion above already made the difference clear: a blocking IO call blocks the process until the operation completes, while a non-blocking IO call returns immediately while the kernel is still preparing the data.
Before explaining the difference between synchronous IO and asynchronous IO we need their definitions. Stevens's definitions (actually POSIX's) are:
A synchronous I/O operation causes the requesting process to be blocked until that I/O operation completes; an asynchronous I/O operation does not cause the requesting process to be blocked.
Note: since select, poll and epoll, which we cover next, all belong to IO multiplexing, and IO multiplexing belongs to the synchronous category, epoll is only pseudo-asynchronous.
A comparison of the IO models is shown in the figure:
With this introduction, the difference between non-blocking IO and asynchronous IO is clear. In non-blocking IO the process is mostly not blocked, but it still has to actively check, and once the data is ready it must itself call recvfrom again to copy the data into user memory. Asynchronous IO is completely different: the process hands the entire IO operation to someone else (the kernel), who signals when it is done. In the meantime the process neither checks the IO status nor copies the data itself.
Comparison of the five IO models:
1. Introduction to the event-driven model
Usually, when we write server request-handling programs, there are the following models:
(1) For every request received, create a new process to handle it;
(2) For every request received, create a new thread to handle it;
(3) For every request received, put it into an event list and let the main process handle requests with non-blocking I/O.
Each of these approaches has its strengths and weaknesses.
Approach (1): creating a new process is relatively expensive, so server performance suffers, but the implementation is simple.
Approach (2): thread synchronization is involved, so you may run into deadlocks and similar problems.
Approach (3): the application code logic is more complex than in the previous two.
All things considered, approach (3), the coroutine / event-driven approach, is generally regarded as the one most network servers adopt.
Lead-in
So that approach has the following drawbacks:
IO multiplexing is what we call select, poll and epoll; some places call this IO style event-driven IO. The benefit of select/epoll is that a single process can handle the IO of many network connections at once. The basic principle is that select/poll/epoll keeps polling all the sockets it is responsible for and notifies the user process when some socket has data.
When the user process calls select, the whole process blocks; meanwhile the kernel "watches" all of select's sockets, and once data in any socket is ready, select returns. The user process then calls read to copy the data from the kernel into the user process.
This is not much different from the blocking IO picture; in fact it is a bit worse, because two system calls (select and recvfrom) are needed while blocking IO uses only one (recvfrom). But select's advantage is that it can handle many connections at once.
So if the number of connections is not very high, a web server using select/epoll is not necessarily better than one using multi-threaded blocking IO, and the latency may even be higher; select/epoll's advantage is not being faster on a single connection but being able to handle more connections.
In the IO multiplexing model each socket is generally set to non-blocking, but as shown above the whole user process is still blocked the entire time, only blocked by select rather than by socket IO.
So the distinctive feature of I/O multiplexing is that, through a single mechanism, one process can wait on many file descriptors at once, and select() returns as soon as any of those descriptors (socket descriptors) becomes readable.
2. IO models
We implemented automatic switching on blocking IO with coroutines; but how are coroutines themselves implemented, in principle? How do we get this automatic switch on blocking IO in an event-driven setting, and what is its formal name? => IO multiplexing
For example socketserver: several clients connected, concurrency achieved in a single thread; that is multiplexing.
IO models are further divided into blocking IO, non-blocking IO, synchronous IO and asynchronous IO. How are they defined, and how do they differ?
Before explaining, let's state a few concepts:
User space and kernel space
Modern operating systems use virtual memory; for a 32-bit OS the addressable (virtual) space is 4 GB (2^32).
The core of the OS is the kernel; it is separate from ordinary applications, can access the protected memory space, and has full access to the underlying hardware.
To keep user processes from manipulating the kernel directly and to keep the kernel safe, the OS divides the virtual space into kernel space and user space.
On Linux, the highest 1 GB (virtual addresses 0xC0000000 to 0xFFFFFFFF) is reserved for the kernel and called kernel space, while the lower 3 GB (0x00000000 to 0xBFFFFFFF) is used by the processes and called user space.
Process switching
To control process execution, the kernel must be able to suspend the process currently running on the CPU and resume a previously suspended one. This is called a process switch, and it is done by the operating system. Every process therefore runs with the support of the kernel and is tightly coupled to it.
Switching from running one process to running another involves the following changes:
Save the CPU context, including the program counter and other registers.
Update the PCB information.
Move the process's PCB into the appropriate queue, such as ready or blocked-on-some-event.
Select another process to run and update its PCB.
Update the memory-management data structures.
Restore the CPU context.
Note: in short, very resource-intensive.
Process blocking
When a running process is waiting for something that has not happened yet, such as a failed request for a system resource, waiting for an operation to finish, data that has not arrived, or no new work to do, the system automatically executes the blocking primitive (block), changing the process from running to blocked. Blocking is thus an active act of the process itself, and only a running process (one holding the CPU) can become blocked. A blocked process consumes no CPU.
File descriptors
A file descriptor is a computer-science term for an abstraction used to refer to a file.
Formally, a file descriptor is a non-negative integer. In practice it is an index into the table of open files that the kernel maintains for each process. When a program opens an existing file or creates a new one, the kernel returns a file descriptor to the process. Low-level programming often revolves around file descriptors. The concept applies mainly to UNIX-like operating systems such as Linux.
Buffered I/O
Buffered I/O, also called standard I/O, is the default I/O mode of most file systems. In Linux's buffered I/O mechanism, the OS caches I/O data in the file system's page cache; data is first copied into the kernel's buffer and only then copied from the kernel buffer into the application's address space. User space cannot access kernel space directly, so a kernel-to-user copy is always needed.
Question to ponder: why must the data go through the kernel first? Wouldn't copying straight into user memory be more direct?
Drawback of buffered I/O:
Data is copied several times between the application address space and the kernel, and the CPU and memory overhead of these copies is substantial.
What exactly are synchronous vs asynchronous IO and blocking vs non-blocking IO, and what is the difference? Different people may well give different answers.
Since signal-driven IO is rarely used in practice, only the remaining four IO models are covered.
Example 1 (non-blocking IO):
import time
import socket
sk = socket.socket(socket.AF_INET,socket.SOCK_STREAM)
sk.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)  # allow quick restart on the same address
sk.bind(('127.0.0.1',6667))
sk.listen(5)
sk.setblocking(False)
while True:
try:
print ('waiting client connection .......')
connection,address = sk.accept() # the process polls actively
print(" ",address)
client_messge = connection.recv(1024)
print(str(client_messge,'utf8'))
connection.close()
except Exception as e:
print (e)
time.sleep(4)
#############################client
import time
import socket
sk = socket.socket(socket.AF_INET,socket.SOCK_STREAM)
while True:
sk.connect(('127.0.0.1',6667))
print("hello")
sk.sendall(bytes("hello","utf8"))
time.sleep(2)
break
Advantage: you can get other work done while waiting for the task to finish (including submitting other tasks, so several tasks can run "in the background" at once).
Disadvantage: the response latency for task completion grows, because the read is polled only every so often and the task may complete at any point between two polls; overall throughput drops as a result.
Example 2 (IO multiplexing):
In the non-blocking example the polling is done by the process itself, and since several tasks may be in flight "in the background", the natural idea is to loop over the completion status of all of them and handle whichever finishes first. This watching duty, however, is handed to the kernel through calls such as select. IO multiplexing has its own system calls: select, poll and epoll. The select call is kernel-level; the difference between select's polling and non-blocking polling is that select can wait on several sockets and watch several IO ports at the same time. When the data of any of these sockets is ready, select returns it as readable, and the process then makes the recvfrom system call to copy the data from the kernel to the user process; that part, of course, blocks.
Example 2:
import socket
import select
sk=socket.socket()
sk.bind(("127.0.0.1",9904))
sk.listen(5)
while True:
r,w,e=select.select([sk,],[],[],5)
for i in r:
# conn,add=i.accept()
#print(conn)
print("hello")
print('>>>>>>')
#*************************client.py
import socket
sk=socket.socket()
sk.connect(("127.0.0.1",9904))
while 1:
inp=input(">>").strip()
sk.send(inp.encode("utf8"))
data=sk.recv(1024)
print(data.decode("utf8"))
Think about it: why does it keep printing even though accept is never called?
select is level-triggered.
Example 3 (concurrent chat on the server side):
#***********************server.py
import socket
import select
sk=socket.socket()
sk.bind(("127.0.0.1",8801))
sk.listen(5)
inputs=[sk,]
while True:
r,w,e=select.select(inputs,[],[],5)
print(len(r))
for obj in r:
if obj==sk:
conn,add=obj.accept()
print(conn)
inputs.append(conn)
else:
data_byte=obj.recv(1024)
print(str(data_byte,'utf8'))
inp=input('reply to client %s >>>'%inputs.index(obj))
obj.sendall(bytes(inp,'utf8'))
print('>>',r)
#***********************client.py
import socket
sk=socket.socket()
sk.connect(('127.0.0.1',8801))
while True:
inp=input(">>>>")
sk.sendall(bytes(inp,"utf8"))
data=sk.recv(1024)
print(str(data,'utf8'))
Note:
With network IO, non-blocking IO also makes the recvfrom system call to check whether the data is ready. Unlike blocking IO, "non-blocking splits one large block of blocked time into many small ones, so the process keeps getting visits from the CPU". Between recvfrom calls the CPU still belongs to the process, and that time can be used for other things.
That is, after a non-blocking recvfrom call the process is not blocked: the kernel returns to it immediately, and if the data is not ready yet it returns an error. The process can then do something else and issue recvfrom again, repeating this over and over; the procedure is usually called polling. It polls the kernel data until it is ready, then copies it to the process for handling. Note that during the copy itself the process is still in a blocked state.
That was a brief introduction to all four IO models.
To recap the difference raised above: a blocking IO call blocks the process until the operation completes, while non-blocking IO returns immediately while the kernel is still preparing the data.
Asynchronous IO is the model with no blocking at all, whereas synchronous IO involves blocking.
A comparison of the IO models is shown in the figure:
With this introduction, the difference between non-blocking IO and asynchronous IO is clear. In non-blocking IO the process is mostly not blocked, but it still has to actively check, and once the data is ready it must itself call recvfrom again to copy the data into user memory. Asynchronous IO is completely different: the process hands the entire IO operation to the kernel and is signaled when it is done, without checking the IO status or copying data itself in the meantime.
Comparison of the five IO models:
The different IO models have now been explained and distinguished, but so far only at the level of concepts; truly internalizing them will take more practice later on.
The topic of this chapter is IO multiplexing, so let's get into the chapter proper.
3. select, poll, epoll: an introduction to IO multiplexing
First, the differences between select, poll and epoll:
select
select first appeared in 4.2BSD in 1983. It monitors an array of file descriptors through a select() system call; when select() returns, the kernel has marked the ready descriptors in the array, so the process can find them and perform the subsequent reads and writes.
select is supported on almost every platform.
One drawback of select is that there is a maximum number of file descriptors a single process can monitor, generally 1024 on Linux, although the limit can be raised by changing a macro or even recompiling the kernel.
Also, select() maintains a data structure holding a large number of file descriptors, and the cost of copying it grows linearly with their number. Because network response delays leave many TCP connections inactive, every call to select() still linearly scans all the sockets, which wastes further overhead.
poll
poll is essentially no different from select, but it has no hard limit on the number of file descriptors.
It is generally not used either; think of it as a transitional stage.
epoll
Not until Linux 2.6 did an implementation directly supported by the kernel appear: epoll. It is recognized as the best-performing multiplexed I/O readiness-notification method on Linux 2.6. Windows does not support it.
There is no limit on the maximum number of file descriptors.
For example, with 100 connections of which two are active, epoll tells the user which two are active and they can be fetched directly, whereas select loops over all of them.
(For reference) epoll supports both level-triggering and edge-triggering (edge-triggered means it only reports which descriptors have just become ready, says it once, and will not tell you again if you take no action). In theory edge-triggering performs better, but the code is considerably more complex.
Another essential improvement is that epoll uses event-based readiness notification. With select/poll the kernel scans all monitored descriptors only after the process calls a certain method, whereas with epoll a descriptor is registered in advance with epoll_ctl(); once that descriptor becomes ready, the kernel uses a callback-like mechanism to activate it immediately, and the process is notified when it calls epoll_wait().
So the so-called asynchronous IO you see in the wild, such as nginx and Tornado, is what we call asynchronous IO but is really IO multiplexing.
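As a concrete sketch of the epoll interface just described (Linux-only; my own illustration written in the spirit of the select examples elsewhere in this text, with an invented port and buffer size):

import select
import socket

server = socket.socket()
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(('localhost', 9600))
server.listen(5)
server.setblocking(False)

ep = select.epoll()
ep.register(server.fileno(), select.EPOLLIN)   # tell the kernel which fd/event we care about
conns = {}

while True:
    for fd, event in ep.poll():                # returns only the fds that actually became ready
        if fd == server.fileno():
            conn, addr = server.accept()
            conn.setblocking(False)
            ep.register(conn.fileno(), select.EPOLLIN)
            conns[conn.fileno()] = conn
        elif event & select.EPOLLIN:
            data = conns[fd].recv(1024)
            if data:
                conns[fd].send(data)           # echo it back
            else:                              # an empty read means the client closed
                ep.unregister(fd)
                conns.pop(fd).close()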
select vs epoll
# First let's define the notion of a stream: a stream can be a file, a socket, a pipe, any kernel object on which I/O can be performed.
# Whether it is a file, a socket or a pipe, we can treat them all as streams.
# Next, I/O operations: with read we read data from a stream, with write we write data to it. Now suppose we need to read
# from a stream but there is no data in it yet (the typical example: a client wants to read from a socket but the server has
# not sent anything back). What do we do?
# Blocking. What does blocking mean? Say you are waiting for a parcel but don't know when the courier will arrive, and you
# have nothing else to do (or everything else has to wait for the parcel); you might as well go to sleep, because you know
# the courier will phone you when the goods arrive (assume the call is guaranteed to wake you).
# Non-blocking busy polling. Continuing the courier example: with busy polling you would need the courier's phone number and
# call him every minute: "are you there yet?"
# Obviously hardly anyone does the second thing; it is not only mindless and a waste of phone charges, it also takes up a lot
# of the courier's time.
# Most programs don't do it either, because the first approach is cheap and simple: cheap in that it burns very little CPU
# time; a sleeping thread drops out of the scheduler's run queue and temporarily stops competing for precious CPU time slices.
#
# To understand how blocking works, let's talk about buffers, and kernel buffers in particular, and then I/O events will
# become clear. Buffers exist to reduce the frequent system calls (which, as you know, are slow) caused by frequent I/O; when
# you operate on a stream you mostly work in units of a buffer, and that is the user-space side. The kernel needs buffers too.
# Suppose there is a pipe; process A writes to it and process B reads from it.
# Suppose the kernel buffer starts out empty, and B, the reader, is blocked. Then A writes into the pipe; the kernel buffer
# goes from empty to non-empty, and the kernel generates an event telling B it should wake up. Call this event "buffer not empty".
# But after the "buffer not empty" notification B may still not have read the data; and the kernel has promised not to drop
# data written into the pipe, so A's data sits in the kernel buffer. If the kernel buffer fills up and B has still not started
# reading, it eventually becomes full; at that point an I/O event is generated telling A that it has to wait (block). Call
# this event "buffer full".
# Suppose B finally starts reading and the kernel buffer empties out; the kernel then tells A that there is room again and it
# can wake up from its long sleep and keep writing. Call this event "buffer not full".
# Maybe that event was delivered to A but A has nothing more to write, while B keeps reading until the kernel buffer is empty.
# At that point the kernel tells B: you need to block! Call this event "buffer empty".
# These four situations cover the four I/O events: buffer full, buffer empty, buffer not empty, buffer not full (all referring
# to the kernel buffer, and all four names are my own invention, made up purely to explain the principle). These four I/O
# events are the basis of blocking synchronization. (If "synchronization" is unclear, study locks, semaphores, condition
# variables and other task-synchronization topics in operating systems.)
#
# Now the drawback of blocking I/O: in blocking mode, one thread can handle the I/O events of only one stream. To handle
# several streams at once you need either multiple processes (fork) or multiple threads (pthread_create), and unfortunately
# neither is very efficient.
# So consider non-blocking busy-polling I/O instead; now we can handle several streams at once (how a stream is switched from
# blocking to non-blocking mode is not discussed here):
# while true {
# for i in stream[]; {
# if i has data
# read until unavailable
# }
# }
# We just keep asking every stream from first to last and then start over. This does handle several streams, but it is clearly
# a bad approach: if none of the streams has data, we only burn CPU for nothing. One more note: in blocking mode the kernel
# handles an I/O event by blocking or waking; in non-blocking mode the I/O event is handed to some other object (the select
# and epoll described below) or simply ignored.
#
# To avoid spinning the CPU we can bring in a proxy (first there was a proxy called select, later another called poll, but the
# two are essentially the same). This proxy is quite capable: it can watch the I/O events of many streams at once; when idle
# it blocks the current thread, and when one or more streams have I/O events it wakes up, and our program then polls all of
# the streams (so we can drop the word "busy"). The code looks like:
# while true {
# select(streams[])
# for i in streams[] {
# if i has data
# read until unavailable
# }
# }
# So if no I/O event occurs, our program blocks in select. The remaining problem is that select only tells us that some I/O
# event happened, not which streams it happened on (could be one, several, or all of them), so we can only poll every stream
# indiscriminately, find the ones we can read from or write to, and operate on them.
# With select we therefore have O(n) indiscriminate polling cost: the more streams we handle at once, the longer each round of
# polling takes. Having said all that, we can finally explain epoll properly.
# epoll can be understood as "event poll": unlike busy polling and indiscriminate polling, epoll tells us which stream had
# which I/O event, so every operation we then perform on those streams is meaningful.
# Before the implementation details, here are epoll's operations:
# epoll_create creates an epoll object, typically epollfd = epoll_create()
# epoll_ctl (the combination of epoll_add/epoll_del) adds or removes a given event on a given stream in the epoll object,
# for example
# epoll_ctl(epollfd, EPOLL_CTL_ADD, socket, EPOLLIN);//epoll_wait returns when the buffer has data
# epoll_ctl(epollfd, EPOLL_CTL_DEL, socket, EPOLLOUT);//epoll_wait returns when the buffer is writable
# epoll_wait(epollfd,...) waits until a registered event occurs
# (Note: when a read or write on a non-blocking stream hits buffer-full or buffer-empty, write/read returns -1 and sets
# errno=EAGAIN, whereas epoll only cares about the buffer-not-full and buffer-not-empty events.)
# Code in the epoll style looks roughly like:
# while true {
# active_stream[] = epoll_wait(epollfd)
# for i in active_stream[] {
# read or write till unavailable
# }
# }
# An example:
# select:
# Thirty students in my class are taking an exam; whoever finishes and wants to hand in their paper presses a button, and a
# light on my (the teacher's) desk turns red.
# Once the light is red I (select) know someone wants to hand in a paper, but I don't know who, so I have to go around asking
# like a fool: "hey, is it you who wants to hand in?" Only by this hugely inefficient method do I find the student and collect
# the paper.
#
#
# epoll:
# This time when someone presses the button, not only does the light come on, it also shows the name of the student handing in.
# So I can go straight to that student and collect the paper. And of course several students can hand in at once.
Trigger modes of IO multiplexing
# Linux IO multiplexing has two modes, level-triggered and edge-triggered, which differ as follows:
#
# Level-triggered: if a file descriptor is already ready for non-blocking IO, a notification is triggered. The IO state may be
# re-checked at any time, and there is no need to do as much IO as possible every time a descriptor becomes ready.
# select and poll are level-triggered.
#
# Edge-triggered: a notification is triggered only when new IO activity arrives since the descriptor's state last changed.
# After receiving such a notification you should do as much IO as possible, because if you do not finish the IO in this round
# you have to wait for the next burst of IO activity before you learn about ready descriptors again.
# Signal-driven IO is edge-triggered.
#
# epoll can use either level-triggering or edge-triggering.
#
# If the difference is still unclear, an example: a pipe receives 1 KB of data and epoll returns immediately; you read
# 512 bytes and then call epoll again. If it is level-triggered, epoll returns immediately, because data is still ready.
# If it is edge-triggered it does not return immediately: there is data to read, but a notification has already been issued
# and no new data has arrived since; only when new data arrives does epoll return, and at that point both the old and the new
# data can be read (provided, of course, that you read as much as you can this time).
# To put it in electrical terms:
#
# Level-triggered: the notification fires whenever the level is high (1) or low (0); simply being in that state is enough to
# be notified. As mentioned above, as long as there is data to read (the descriptor is ready), level-triggered epoll returns
# immediately.
#
# Edge-triggered: the notification fires only when the level changes (high to low, or low to high). As mentioned above, even
# if there is data to read, if no new IO activity has arrived epoll does not return immediately.
水平触发和边缘触发
IO多度复用触发方式
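In Python's select.epoll the two modes differ only in the event mask: the default is level-triggered, and OR-ing in EPOLLET switches to edge-triggered, in which case you should drain the socket on every notification. A small illustration (my own sketch, not part of the original write-up; Linux only, using a local socketpair just to have something to read):

import select
import socket

a, b = socket.socketpair()            # a connected pair to demonstrate with
b.setblocking(False)

ep = select.epoll()
# edge-triggered registration: EPOLLIN | EPOLLET (drop EPOLLET for level-triggered)
ep.register(b.fileno(), select.EPOLLIN | select.EPOLLET)

a.send(b"hello")
print(ep.poll(timeout=1))             # one notification for the newly arrived data

def drain(sock):
    """With edge triggering you must read until EAGAIN, or the leftover bytes stay silent."""
    chunks = []
    while True:
        try:
            chunk = sock.recv(4096)
        except BlockingIOError:       # errno EAGAIN: nothing left in the kernel buffer
            break
        if not chunk:                 # peer closed
            break
        chunks.append(chunk)
    return b"".join(chunks)

print(drain(b))                       # b'hello'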
Python IO model. IO multiplexing (multiplexed IO):
In the non-blocking example, it is the process itself that does the polling, while "in the background" several tasks may be running at the same time. The idea is to loop over the completion status of multiple tasks and handle whichever one finishes. However, the heavy lifting of this monitoring is handed over to the kernel via calls such as select. IO multiplexing revolves around a few special system calls: select, poll and epoll. The select call happens at kernel level; the difference between select's polling and non-blocking polling is that select can wait on multiple sockets, monitoring multiple IO ports at the same time. As soon as data is ready on any of the sockets, select returns and reports them as readable; the process then makes the recvfrom system call to copy the data from the kernel to the user process, and that step is, of course, blocking.
import socket
import select
# concurrency via IO multiplexing
sk = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
inp = [sk, ]
sk.bind(('127.0.0.1', 8080))
sk.listen(5)
while True:
    r, w, e = select.select(inp, [], [])  # block until a socket in inp becomes readable
    for obj in r:
        if obj == sk:  # the listening socket becomes readable when a new client connects
            conn, addr = obj.accept()  # accept the new connection and watch it as well
            print(conn)
            inp.append(conn)
        else:
            data = obj.recv(1024)
            print(data.decode('utf-8'))
            msg = input('reply to client %s: ' % inp.index(obj))
            obj.send(bytes(msg, 'utf-8'))
#--------------------------------------------------------------------
import socket
sk = socket.socket(socket.AF_INET,socket.SOCK_STREAM)
sk.connect(('127.0.0.1',8080))
while True:
msg = input('>>>')
if not msg:continue
sk.send(msg.encode('utf-8'))
data = sk.recv(1024)
print(data.decode('utf-8'))
IO multiplexing for concurrency
A file descriptor is really just what we usually call a handle, except that "file descriptor" is the Linux term. Note that when we call accept or recv we are effectively issuing a recvfrom request to the system:
(1) If the kernel buffer has no data: wait; once data arrives in the kernel buffer, copy it to the user process's buffer.
(2) If select has already told us that the kernel buffer for some file descriptor has data, then when we call accept or recv, the data is copied straight to the user buffer.
Question 1: if you start 5 clients and have them send messages in the order 5, 4, 3, 2, 1, in what order does the server reply?
Answer: ......
Question 2: how do you make sure that when one client exits, the server and the other clients can keep communicating normally?
Answer: when a client exits, add exception handling to catch the error raised for that client and remove its conn from the list that select is monitoring:
# On Linux, a client that has disconnected shows up as an empty recv():
if not data_byte:
    inputs.remove(obj)
    continue
# On Windows, recv() on a disconnected client raises an exception instead:
try:
    data_byte = obj.recv(1024)
    print(str(data_byte, 'utf8'))
    inp = input('reply to client %s >>>' % inputs.index(obj))
    obj.sendall(bytes(inp, 'utf8'))
except Exception:
    inputs.remove(obj)
IV. Asynchronous IO
import selectors
from socket import *

def read(conn, mask):
    try:
        data = conn.recv(1024)
        if not data:
            raise Exception
        print(data.decode('utf-8'))
        conn.send(data.upper())
    except Exception as e:
        print(e)
        sel.unregister(conn)
        conn.close()

def accept(sk, mask):
    conn, addr = sk.accept()
    print(conn)
    sel.register(conn, selectors.EVENT_READ, read)

sk = socket(AF_INET, SOCK_STREAM)
sk.bind(('127.0.0.1', 8080))
sk.listen(5)
sk.setblocking(False)
sel = selectors.DefaultSelector()               # create a selector; DefaultSelector picks select/epoll/kqueue automatically
sel.register(sk, selectors.EVENT_READ, accept)  # register the listening socket with accept as its callback
while True:
    events = sel.select()                       # wait for registered events
    for key, mask in events:
        callback = key.data                     # key.data is the registered callback (accept or read)
        callback(key.fileobj, mask)             # key.fileobj is the socket the event fired on
Asynchronous IO
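Strictly speaking, the selectors example above is still IO multiplexing rather than fully asynchronous IO; in modern Python the same upper-casing echo server is usually written on top of asyncio, which drives a selector-based event loop for you. A rough equivalent (my own sketch, not part of the original tutorial):

import asyncio

async def handle(reader, writer):
    while True:
        data = await reader.read(1024)   # yields to the event loop instead of blocking
        if not data:
            break
        writer.write(data.upper())       # same "reply in upper case" behaviour as above
        await writer.drain()
    writer.close()

async def main():
    server = await asyncio.start_server(handle, '127.0.0.1', 8080)
    async with server:
        await server.serve_forever()

asyncio.run(main())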
V. A few words on IO programming
In computing, IO stands for Input/Output. Programs and their runtime data live in memory and are executed by the CPU, an extremely fast computing core; wherever data has to be exchanged with the outside world, usually disk or network, an IO interface is needed.
For example, when you open a browser and visit the Sina homepage, the browser needs network IO to fetch Sina's web page. The browser first sends data to the Sina server, telling it "I want the HTML of the homepage"; sending data out is Output. Then the Sina server sends the page back; receiving data from outside is Input. So a program's IO usually involves both an Input data flow and an Output data flow. There are also cases with only one of them: reading a file from disk into memory is only Input, and writing data out to a disk file is only Output.
In IO programming, the Stream is an important concept. Think of a stream as a water pipe, with data as the water in it, except the water only flows one way. An Input Stream is data flowing from the outside (disk, network) into memory, and an Output Stream is data flowing from memory to the outside. For browsing the web, the browser and the Sina server need at least two such "pipes" between them so that data can be both sent and received.
Because the CPU and memory are far faster than peripherals, IO programming has a serious speed-mismatch problem. Say you want to write 100 MB of data to disk: the CPU can produce the 100 MB in 0.01 seconds, but the disk may need 10 seconds to receive it. What can you do? There are two options:
The first is for the CPU to wait, that is, the program stops executing subsequent code until the 100 MB has been written to disk 10 seconds later, then carries on. This mode is called synchronous IO.
The other is for the CPU not to wait: it just tells the disk "take your time, no hurry, I'll get on with other things", and the subsequent code runs immediately. This mode is called asynchronous IO.
The difference between synchronous and asynchronous is whether you wait for the result of the IO. It is like ordering at McDonald's: you say "a hamburger, please", the cashier says sorry, the hamburger is made to order and will take 5 minutes, so you stand at the counter for 5 minutes, take your burger, and then go shopping. That is synchronous IO.
Or you say "a hamburger, please", the cashier tells you it will take 5 minutes and you can go shopping first, and they will notify you when it is ready, so you can immediately go and do something else (shopping). That is asynchronous IO.
Clearly a program written with asynchronous IO can perform far better than one using synchronous IO, but the downside of asynchronous IO is a more complicated programming model. Think about it: you have to be told, somehow, that "the hamburger is ready", and there are different ways of being told. If the waiter comes over to find you, that is the callback model; if the waiter texts you and you have to keep checking your phone, that is the polling model. Either way, asynchronous IO is far more complex than synchronous IO.
The ability to perform IO is provided by the operating system, and every programming language wraps the operating system's low-level C interface for convenient use; Python is no exception.
The IO programming shown here is all in synchronous mode; asynchronous IO is left aside because of its much higher complexity.
This approach has the following drawbacks:
Unlike the traditional programming model above, an event-driven program starts up and then just waits. Waits for what? To be triggered by an event. Traditional programs also have "waiting" moments; for example, in code block D you define an input() and wait for the user to type something. But that is different from the waiting described below: with traditional programming, for input() say, you as the programmer know, and in fact force, the user to enter something, perhaps a number, perhaps a file name, and if they type the wrong thing you prompt them and ask them to re-enter it. The waiting in an event-driven program is completely open-ended: it does not know, and does not force, what the user will enter or do. As soon as some event occurs, the program simply "reacts" to it. Such events include typed input, the mouse, pressing a key on the keyboard, and the firing of an internal timer.
Two approaches:
|
Ignore missing attributes for AST objects
$ python3.8
Python 3.8.5+ (default, Dec 12 2020, 16:21:57)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import ast
>>> cpython_ast = ast.Assign([ast.Name("test", ast.Store())], ast.Name("foo", ast.Load()))
>>> cpython_ast.type_comment
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'Assign' object has no attribute 'type_comment'
>>> cpython_ast.targets
[<_ast.Name object at 0x7f3f257ce250>]
>>> cpython_ast.value
<_ast.Name object at 0x7f3f257a0e20>
>>>> import ast; pypy_ast = ast.Assign([ast.Name("test", ast.Store())], ast.Name("foo", ast.Load()))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: Assign constructor takes either 0 or 3 positional arguments
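If the goal is just to build an Assign node that behaves the same on both interpreters, one fairly safe route (my own sketch, not taken from the issue itself) is to avoid relying on optional attributes being auto-filled: either pass every field explicitly, or let ast.parse construct a fully-populated node for you.

import ast

# Option 1: spell out all fields, including the optional type_comment
node = ast.Assign(
    targets=[ast.Name("test", ast.Store())],
    value=ast.Name("foo", ast.Load()),
    type_comment=None,
)

# Option 2: let the parser build the node, with all attributes present
node2 = ast.parse("test = foo").body[0]

print(ast.dump(node))
print(ast.dump(node2))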
|
Was hoping to get a bit of feedback on designing a rrule. I'm wondering what the best way to create a minutely rule is but omit entire days. So for example get 5 minute intervals every weekday between some time but omit certain holidays. I thought of using something like
INTERVAL = 5
BUCKETS = int(60 / INTERVAL * 24)
WEEKDAYS = (MO, TU, WE, TH, FR)
nyd = rrule(MONTHLY, dtstart=datetime(2019, 1, 1, 0, 0), bymonth=1, byminute=range(0, 59, INTERVAL), byhour=range(0,24), bysetpos=range(1, BUCKETS + 1), byweekday=WEEKDAYS, count=300)
which works, but I'm wondering if this is some non recommended hackery. The reason I ask is because bysetpos only supports values up until 366. For minutely and secondly data you could have situations where you want to use bysetpos up to 1440 and 86400 respectively, so I'm wondering if bysetpos is not intended for this use?
I would suggest an rruleset with an exrule:
from datetime import datetime, timedelta
from dateutil.rrule import rrule, rruleset, MINUTELY, MO, TU, WE, TH, FR

INTERVAL = 5
BUCKETS = int(60 / INTERVAL * 24)
WEEKDAYS = (MO, TU, WE, TH, FR)
rrule_base = rrule(MINUTELY, dtstart=datetime(2019, 1, 1, 0, 0), count=300, byweekday=WEEKDAYS)
# (interval=INTERVAL could be passed above if you want 5-minute steps rather than every minute)
rrset = rruleset()
rrset.rrule(rrule_base)
# Option 1: exclude entire holiday days with an exrule (holidays is assumed to be a list of datetimes)
for dt in holidays:
    rrset.exrule(rrule_base.replace(dtstart=dt, until=(dt + timedelta(days=1)), count=None))
# Option 2: exclude the individual occurrences that fall on a holiday with exdate
for dt in holidays:
    for inst in rrule_base.between(dt, dt + timedelta(days=1)):
        rrset.exdate(inst)
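A quick sanity check (again assuming holidays is defined) is simply to materialize the set; since the base rule has count=300, the set is finite and cheap to list:

occurrences = list(rrset)          # merge the rrule with the exrules/exdates
print(len(occurrences), occurrences[:5])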
Following up on your comments about bysetpos and the RFC, I don't see any mention of restricting this. My guess is that it was implemented with DAILY frequency in mind? The docstrings also don't mention anything
If given, it must be either an integer, or a sequence of integers,
positive or negative. Each given integer will specify an occurrence
number, corresponding to the nth occurrence of the rule inside the
frequency period. For example, a bysetpos of -1 if combined with a
MONTHLY frequency, and a byweekday of (MO, TU, WE, TH, FR), will
result in the last work day of every month.
list(rrule(MINUTELY, dtstart=datetime(2019, 1, 1, 0, 0), bysetpos=367, count=3, byweekday=MO))
In addition, while playing around with this, I think there may be a broader issue with bysetpos when using MINUTELY. This runs fine
In [11]: list(rrule(MINUTELY, dtstart=datetime(2019, 1, 1, 0, 0), bysetpos=1, count=1, byweekday=MO))
Out[11]: [datetime.datetime(2019, 1, 7, 0, 0)]
but this seems to run indefinitely
list(rrule(MINUTELY, dtstart=datetime(2019, 1, 1, 0, 0), bysetpos=2, count=1, byweekday=MO))
Perhaps the rrule._iter yield statement is never reached? Or else the algorithm is just taking a very long time; I still haven't fully grokked how _iter works.
|
About Boolean Function
I want to find the Algebraic Normal Form of a boolean function.
For example:
sage: R.<x1,x2,x3> = BooleanPolynomialRing(3, order='degneglex')
sage: x3>x1
True
sage: L = BooleanFunction(x1+x2*x3)
sage: L.truth_table()
(False, True, False, True, False, True, True, False)
sage: L.algebraic_normal_form()
x0 + x1*x2
Unfortunately, the algebraic normal form of this specific boolean function is not correct (x0 should be x3, x1 should be x2 and x2 should be x1). The answer of the algebraic_normal_form() method is in order='lex' (not, as I desired, in order='degneglex'). Furthermore, the ANF is calculated outside this ring (why is x0 involved?).
How should I extract the ANF() in correct syntax (where x3>x2>x1)?
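Not a direct answer to the ordering question, but one way to see what is going on (a sketch, assuming the usual Sage API and that the ANF ring's generators x0, x1, x2 simply correspond positionally to x1, x2, x3) is to confirm that the ANF still encodes the same function, just with renamed variables:

sage: from sage.crypto.boolean_function import BooleanFunction
sage: R.<x1,x2,x3> = BooleanPolynomialRing(3, order='degneglex')
sage: L = BooleanFunction(x1 + x2*x3)
sage: anf = L.algebraic_normal_form()
sage: anf.parent()                                    # a separate BooleanPolynomialRing in x0, x1, x2
sage: BooleanFunction(anf).truth_table() == L.truth_table()   # expected True if only the names differ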
|
I have been trying to figure out how to generate a tar file of a directory of files. I have this code
tar = tarfile.open('/tmp/' + newDate + '.tar', 'w')
for fname in get_matching_s3_keys(bucket=agtBucket, prefix=key, suffix='.log'):
print(fname)
file_obj = s3object.Object(agtBucket, fname)
file_content = file_obj.get()['Body'].read()
tar.add(file_content)
tar.close()
But I get this error when I try to add file_content to tar
"errorMessage": "a bytes-like object is required, not 'str'"
I hope someone can please help me correct what I have wrong.
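For what it's worth, tar.add() expects a path on the filesystem, not raw bytes. A minimal sketch of the in-memory approach (reusing the same get_matching_s3_keys helper, newDate, agtBucket, key and s3object names from the snippet above, which are assumptions here) is to wrap the bytes with TarInfo and addfile():

import io
import tarfile

with tarfile.open('/tmp/' + newDate + '.tar', 'w') as tar:
    for fname in get_matching_s3_keys(bucket=agtBucket, prefix=key, suffix='.log'):
        file_content = s3object.Object(agtBucket, fname).get()['Body'].read()  # bytes
        info = tarfile.TarInfo(name=fname)           # member name inside the archive
        info.size = len(file_content)                # size must be set before addfile()
        tar.addfile(info, io.BytesIO(file_content))  # hand tarfile a file-like object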
|
Is it intended behavior that hMailServer will not add the SpamAssassin score to its own score unless the SpamAssassin score is greater than or equal to the SpamAssassin spam threshold (i.e., SpamAssassin tags it as spam)? I've seen some discussions on this and it is eventually just dropped and the people usually just lower their SpamAssassin threshold score to make hMailServer always add the scores together (thus making SpamAssassin tag virtually everything as spam). It seems more logical to always add the scores together (or at least give us the option). Is this by design or is it a bug? Are there people who actually want it to work that way it currently is?
Thanks,
Chad
If SpamAssassin tags mail as spam, does that change the message?
(I don't use SpamAssassin)
I'm also guessing that the spam score may NOT be relayed to hMailserver by SpamAssassin unless the message is marked as SPAM
When SpamAssassin scores it above the configured threshold, it will add an X-Spam-Status: YES header (along with other informative headers). I downloaded the hMailServer source code and verified that it does not store the SpamAssassin score (and thus pass it back up to main spam handling routine) unless it finds the X-Spam-Status: YES header. Unless most people really like this behavior, I propose that it always count the score regardless of the X-Spam-Status value. In fact, I feel like that is the whole point of scoring....you keep testing and keep adding up scores until your ultimate threshold is reached. In my particular case, SpamAssassin gives a score of 4.9 (where 5.0 is the SpamAssassin threshold) and then hMailServer failed the SPF test which I score as 5. The total score should have been 9.9, but hMailServer just scored it as 5. My delete threshold is 9, so the mail should have been deleted but it wasn't.
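Whichever way the policy question goes, pulling the SpamAssassin score out of the headers yourself is straightforward. The snippet below is only an illustration in Python (hMailServer's own event handlers are VBScript/JScript, so this is not hMailServer code; the header format is the standard SpamAssassin X-Spam-Status one):

Code: Select all
import re

def spamassassin_score(headers: str) -> float:
    """Extract the numeric score from an X-Spam-Status header, e.g.
    'X-Spam-Status: No, score=4.9 required=5.0 ...' -> 4.9."""
    match = re.search(r'^X-Spam-Status:\s*\w+,\s*score=(-?\d+(?:\.\d+)?)',
                      headers, re.IGNORECASE | re.MULTILINE)
    return float(match.group(1)) if match else 0.0

# toy example
print(spamassassin_score("X-Spam-Status: No, score=4.9 required=5.0 tests=BAYES_50"))  # 4.9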
So if you had set your SpamAssassin score at 1, the actual score value would have been passed, and the message rejected.
From what you are saying, the downside to setting the SpamAssassin mark score to 1 is that some mail that doesn't reach the hMailServer spam score will contain a SpamAssassin header showing a SpamAssassin score?
Is that such a big deal?
In my opinion I can't see why you would want to add the SA score unless SA has deemed it potential spam.
By using SA you are trusting it and its rules to make a judgement: as per your SA configuration, a mail is either deemed spam or deemed safe.
You know that SA rules add scores in decimals and as negatives, such as
0.7
-0.2
0.5
1.3
-1.3
and as such it collectively determines the spam value. Furthermore, it does this because its rules and abilities are FAR MORE advanced than anything HMS does internally.
Using the SA score is only appropriate when you think SA may have determined it as spam (I have the threshold set at 3, which seems to be just right) and yet you do not want to FULLY trust it and want to add HMS tests on top of it too. I.e., maybe SA scores 3.1 (determining it as SPAM), and you have your own HMS threshold set to something higher (5 or 6), allowing for your own HMS tests. Of course, your own HMS test scoring is probably not as finely tuned as the SA scoring rules, so it is more brute force.
The alternative you are proposing is to say that even though SA thinks a mail is not spam (because it only scored 1.0), you are going to take this '1' and add it to your own HMS tests. Well, what happens if the SA 'HELO' test gives a score of 1, and then you run the HMS 'HELO' test scoring 4 (the same test, yet scored twice)? You now deem your mail as spam (achieving 5) when in reality neither SA nor HMS has REALLY found it to be spam at all. Whereas using the existing method, the mail, even though it is tested twice for the same condition, still isn't deemed spam (SA scores it 1.0, HMS scores it 4, and yet your HMS score threshold is 5).
My point is that it is right, in my mind, for it to behave the way it does (only counting the SA score when SA considers it spam) when you have already decided to trust SA's decision making and judgement.
jimimaseye wrote: "Furthermore it does this because the rules and its abilities are FAR MORE advanced than any HMS does internally."
Agreed.
Thinking about this, I'd expect that the SpamAssassin score would be added irrespective of whether SpamAssassin marked the message as SPAM or not. (How else could the negative values be useful?) That's certainly how the GUI makes it look.
I'd think that NOT doing that is a bug, and that this should be added to the issue tracker at https://github.com/hmailserver/hmailserver/issues
To give you an idea, Matt, a typical SpamAssassin header is added regardless and looks like this (everything from "tests=" onwards is the list of rules that matched and were scored; the spam 'report' then lists the tests individually with their scores):
Code: Select all
X-Spam-Status: No, score=0.3 required=3.0 tests=BAYES_00,
DYN_RDNS_AND_INLINE_IMAGE,HTML_MESSAGE,RDNS_DYNAMIC,SPF_PASS,
T_KAM_HTML_FONT_INVALID autolearn=no autolearn_force=no version=3.4.0
X-Spam-Report:
* -0.0 SPF_PASS SPF: sender matches SPF record
* 0.0 T_KAM_HTML_FONT_INVALID BODY: Test for Invalidly Named or Formatted
* Colors in HTML
* -1.9 BAYES_00 BODY: Bayes spam probability is 0 to 1%
* [score: 0.0000]
* 0.0 HTML_MESSAGE BODY: HTML included in message
* 1.0 RDNS_DYNAMIC Delivered to internal network by host with
* dynamic-looking rDNS
* 1.2 DYN_RDNS_AND_INLINE_IMAGE Contains image, and was sent by dynamic
* rDNS
*
Now, this is a good example: given this particular report scored overall 0.3, how would you have HMS take that score (as it only deals with integer scores)?
And here is another example:
Code: Select all
X-Spam-Status: No, score=-4.4 required=3.0 tests=BAYES_00,DKIM_SIGNED,
DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FROM,HTML_MESSAGE,KHOP_RCVD_TRUST,
RCVD_IN_DNSWL_LOW,RCVD_IN_HOSTKARMA_YE,RCVD_IN_MSPIKE_H3,RCVD_IN_MSPIKE_WL,
SPF_PASS,T_KAM_HTML_FONT_INVALID autolearn=ham autolearn_force=no
version=3.4.0
X-Spam-Report:
* 0.0 RCVD_IN_HOSTKARMA_YE RBL: HostKarma: relay in yellow list (varies)
* [209.85.212.177 listed in hostkarma.junkemailfilter.com]
* 0.0 FREEMAIL_FROM Sender email is commonly abused enduser mail provider
* (sandimy[at]gmail.com)
* -0.7 RCVD_IN_DNSWL_LOW RBL: Sender listed at http://www.dnswl.org/, low
* trust
* [209.85.212.177 listed in list.dnswl.org]
* -0.0 RCVD_IN_MSPIKE_H3 RBL: Good reputation (+3)
* [209.85.212.177 listed in wl.mailspike.net]
* -0.0 SPF_PASS SPF: sender matches SPF record
* 0.0 T_KAM_HTML_FONT_INVALID BODY: Test for Invalidly Named or Formatted
* Colors in HTML
* -1.9 BAYES_00 BODY: Bayes spam probability is 0 to 1%
* [score: 0.0000]
* 0.0 HTML_MESSAGE BODY: HTML included in message
* -0.1 DKIM_VALID_AU Message has a valid DKIM or DK signature from author's
* domain
* -0.1 DKIM_VALID Message has at least one valid DKIM or DK signature
* 0.1 DKIM_SIGNED Message has a DKIM or DK signature, not necessarily
* valid
* -0.0 RCVD_IN_MSPIKE_WL Mailspike good senders
* -1.8 KHOP_RCVD_TRUST DNS-Whitelisted sender is verified
*
The result of this one is MINUS 4.4 (-4.4). Now if you were to apply your own HMS rules for DNSBL or SURBL (ones SA doesn't cover) and even score a match at 5 and 4 (total 9), it would still not hit an HMS threshold of 5 (which you may have set), despite HMS actually scoring way above it.
This explains why I believe you should only use SA scores when SA has determined the mail as spam by hitting ITS threshold.
jimimaseye wrote: "... or SURBL (that SA doesnt cover) ..."
Mine does... Hint: "URIBL"
Code: Select all
X-Spam-Status: Yes, score=44.5 required=3.0 tests=BAYES_99,BAYES_999,
BODY_URI_ONLY,KAM_RBL,KAM_VERY_BLACK_DBL,MSGID_FROM_MTA_HEADER,
RAZOR2_CF_RANGE_51_100,RAZOR2_CF_RANGE_E8_51_100,RAZOR2_CHECK,
RCVD_IN_BL_SPAMCOP_NET,RCVD_IN_BRBL_LASTEXT,RCVD_IN_MSPIKE_BL,
RCVD_IN_MSPIKE_L5,RCVD_IN_PBL,RCVD_IN_PSBL,RCVD_IN_RP_RNBL,RCVD_IN_SORBS_WEB,
RCVD_IN_XBL,RCVD_NUMERIC_HELO,TVD_RCVD_IP,TVD_RCVD_IP4,T_FSL_HELO_BARE_IP_2,
URIBL_AB_SURBL,URIBL_BLACK,URIBL_DBL_SPAM,URIBL_JP_SURBL,URIBL_SBL,
URIBL_SBL_A,URIBL_SC_SURBL,URIBL_WS_SURBL autolearn=disabled version=3.4.0
X-Spam-Report:
* 0.6 URIBL_SC_SURBL Contains an URL listed in the SC SURBL blocklist
* [URIs: hotdrugsstore.in]
* 1.3 URIBL_JP_SURBL Contains an URL listed in the JP SURBL blocklist
* [URIs: hotdrugsstore.in]
* 4.5 URIBL_AB_SURBL Contains an URL listed in the AB SURBL blocklist
* [URIs: hotdrugsstore.in]
* 1.6 URIBL_WS_SURBL Contains an URL listed in the WS SURBL blocklist
* [URIs: hotdrugsstore.in]
* 3.3 RCVD_IN_PBL RBL: Received via a relay in Spamhaus PBL
* [109.135.11.38 listed in zen.spamhaus.org]
* 0.4 RCVD_IN_XBL RBL: Received via a relay in Spamhaus XBL
* 3.5 BAYES_99 BODY: Bayes spam probability is 99 to 100%
* [score: 1.0000]
* 0.0 TVD_RCVD_IP Message was received from an IP address
* 0.0 TVD_RCVD_IP4 Message was received from an IPv4 address
* 1.2 RCVD_NUMERIC_HELO Received: contains an IP address used for HELO
* 0.2 BAYES_999 BODY: Bayes spam probability is 99.9 to 100%
* [score: 1.0000]
* 1.9 RAZOR2_CF_RANGE_E8_51_100 Razor2 gives engine 8 confidence level
* above 50%
* [cf: 100] * 0.9 RAZOR2_CHECK Listed in Razor2 (http://razor.sf.net/)
* 0.5 RAZOR2_CF_RANGE_51_100 Razor2 gives confidence level above 50%
* [cf: 100]
* 2.5 URIBL_DBL_SPAM Contains a spam URL listed in the DBL blocklist
* [URIs: hotdrugsstore.in]
* 1.7 URIBL_BLACK Contains an URL listed in the URIBL blacklist
* [URIs: hotdrugsstore.in]
* 3.2 RCVD_IN_MSPIKE_L5 RBL: Very bad reputation (-5)
* [109.135.11.38 listed in bl.mailspike.net]
* 1.4 RCVD_IN_BRBL_LASTEXT RBL: No description available.
* [109.135.11.38 listed in bb.barracudacentral.org]
* 2.7 RCVD_IN_PSBL RBL: Received via a relay in PSBL
* [109.135.11.38 listed in psbl.surriel.com]
* 0.8 RCVD_IN_SORBS_WEB RBL: SORBS: sender is an abusable web server
* [109.135.11.38 listed in dnsbl.sorbs.net]
* 1.3 RCVD_IN_RP_RNBL RBL: Relay in RNBL,
* https://senderscore.org/blacklistlookup/
* [109.135.11.38 listed in bl.score.senderscore.com]
* 1.3 RCVD_IN_BL_SPAMCOP_NET RBL: Received via a relay in bl.spamcop.net
* [Blocked - see <http://www.spamcop.net/bl.shtml?109.135.11.38>]
* 0.1 URIBL_SBL_A Contains URL's A record listed in the SBL blocklist
* [URIs: meekly.hotdrugsstore.in]
* 1.6 URIBL_SBL Contains an URL's NS IP listed in the SBL blocklist
* [URIs: meekly.hotdrugsstore.in]
* 2.0 KAM_RBL Higher scores for hitting multiple trusted RBLs
* 0.0 RCVD_IN_MSPIKE_BL Mailspike blacklisted
* 5.0 KAM_VERY_BLACK_DBL Email that hits both URIBL Black and Spamhaus DBL
* 0.0 MSGID_FROM_MTA_HEADER Message-Id was added by a relay
* 0.0 T_FSL_HELO_BARE_IP_2 No description available.
* 1.0 BODY_URI_ONLY Message body is only a URI in one line of text or for
* an image
SorenR wrote: "Mine does... Hint: "URIBL""
I meant you could be adding your OWN lookups into HMS (and setting your own scores against positive matches for them) that Spamassassin has not been coded/rule defined to cover. (I know SA covers most of the main/popular ones such as multi.surbl.org etc., but there might be one the user has found that is not covered by SA rules that he chooses to add.)
jimimaseye wrote: "I meant you could be adding your OWN lookups into HMS ... that Spamassassin has not been coded/rule defined to cover."
Ah.. My bad.
SpamAssassin can be configured to use other URIBL and DNSBL that aren't provide out of the box. I do, in fact, do this. In my configuration, I was assuming that spam scores were all additive and so I disable the DNS and SURBL options in hMailServer so that they would not contribute to double scoring. I have also spent time slowly studying the types of spam we receive and carefully tuning SpamAssassin to my exact needs. I'm somewhat new to hMailServer...coming from using other commercial products (SecurityGateway, SpamTitan, ORF, etc). All of the other commercial products having a continuous running spam score and I have found this to be very logical and effective. I let hMailServer continue to do SPF, HELO command, and sender DNS-MX checks because it is better suited to do so. I feel like those checks combined with the SpamAssassin checks provide very accurate spam checks. In fact, the majority of spams that get through to my system are the ones where SpamAssassin scores below 5 and hMailServer ignores the score (but would have been spam if it didn't because it failed an hMailServer test).
I also feel like hMailServer should change to all floating point scoring like some of the other commercial solutions so that there wouldn't be any integer truncation when adding everything together.
jimimaseye, your example that scored negative is a bit biased. Part of the negative contribution was because the mail is in some trusted whitelist databases. In my experience, you don't find the same IP's and domains both in blacklists AND whitelists....so, while not impossible, it is statistically unlikely the any DNSBL or SURBL hits would have fired for your example.
For me, the functionality seems logical (as I tried to explain above). The SpamAssassin score can be 'used', or simply the spam=yes/no result taken and your own score applied.
I suspect it was intended that you use EITHER the SpamAssassin scores ONLY, thereby leaving all decision making up to SA and not adding to it with your own rules, OR you use SA's conclusion ("spam=yes"), score it yourself, and have your own HMS testing alongside (HELO, SPF, DNSBL etc). It seems that using the SA score and then adding the HMS test score to it isn't what it was intended for. After all, why would you ask SA to test for something, then do exactly the same test again in HMS (effectively doubling up the scoring probability), which is a scenario that is VERY possible (we have already identified that SpamAssassin does the SPF, HELO, and mainstream DNSBL and SURBL tests anyway, so why ask HMS to double-check and double the points if you are choosing to accept SA's scoring)?
Ideally, you would simply set 'USE SPAMASSASSIN SCORING=YES', set your threshold accordingly (as you do in SA), and leave all HMS scoring and testing turned off (with the exception of a specific SURBL/DNSBL test that you KNOW your SA rules are not covering).
Spam scoring is a regional thing, I don't see the same SPAM as everyone else thus my rules should ideally be different and scored differently.
Unless you are a SpamAssassin expert (or sufficiently nerdy) who can create your own rules to grade scoring to match your environment, most choose not to and rely on pre-built rules that are updated based on a world average of SPAM, simply for lack of time to maintain those rules. SPAM is evolving all the time and so should the rules that catch it.
Most spammers break rules to deliver SPAM in the most efficient way possible - because someone is paying them - and as we all know; Time is Money.
- GreyListing is a powerfull tool - This is where "Time is Money" become important.
- SPF WAS a powerful tool; its use is increasing amongst spammers.
- DKIM make everything just a bit more reliable.
- HELO is unfortunately not a reliable way to identify spammers as more home users are "in-sourcing" their mail systems.
- MX... Well, this is where WE can break the rules. It is not a requirement to have MX records according to the RFC's. But, we believe any respectable IT department would have them, if for nothing else than to fight SPAM with a blackhole.MX setup.
- RBL's and SURBL's is a matter of choice. Find one (or more) you trust and "get an opinion from a trusted source".
SpamAssassin will do nearly all of the above, maybe not with the scoring we'd like to use, but then we can add those to hMailServer ourselves. It's like fine-tuning SpamAssassin, outside of SpamAssassin.
Add it all together, and we'll get a pretty good picture of what is SPAM, and what is HAM.
For my part I rate everything as 3, SpamAssassin triggers at 3.0. Anything 3 or above is marked as SPAM, moved into the users SPAM folder, and forwarded to a dedicated SPAM user for further analysis - if needed. Only SPAM scored above 100 is deleted/rejected.
False-positives are added to a hMail rule-based whitelist to prevent them being treated as SPAM - however they will still be marked as SPAM.
Each users SPAM folder and INBOX folder is processed every night to maintain the Bayesian database used by SpamAssassin. The intended rationale is to "localize" SpamAssassin to my neck of the woods. Also the users are able to partly influence classification by moving emails between the two folders for processing the next night (actually, they have a 30 day window).
Despite all efforts I do get SPAM that only SpamAssassin catch... Spammers are getting increasingly cleverer.
As I posted earlier...
I did a quick search on my server for the highest SPAM score in the last 6 months... 66.2 and it passed ALL of the other SPAM tests in hMailServer (Greylist, SPF, DKIM, HELO, MX, 4 RBL's and SURBL), except SpamAssassin ...
Just for info, my SA has a mark threshold of 3 and that's when mail gets marked as [SPAM].
Anything that reaches HMS over a score of 7 is automatically deleted - remaining unseen and unknown (technically not true - it gets moved to Trash folder straight away by a rule, so viewable IF you want to go there and see it).
I would say that 98% (if not more) of my mail comes in clean and unmarked, or correctly deleted as definite spam. The other 2% gets marked as spam (by SA) but is actually genuine (usually because the mail comes in with all-capitals subjects and/or body content; SpamAssassin doesn't like that).
Spam fighting is an art and a never ending battle. I feel like all available spam technologies should be employed. I like the design of being able to deploy anti-spam technologies in a score fashion and add everything together to make the final decision.
Not too many people have chimed in one way or the other. If I am the only one who really feels strongly that all methods should add together to one final score, then I'll just modify the source to behave like I want. After inspecting the source, it seems this change would be rather easy to make. I was really hoping the developer and community would feel as strongly as I do...as I hate maintaining a forked project.
It looks as though I could also write a script to grab the SpamAssassin score and the HMS score and add them together myself in the situations where HMS doesn't add them itself.
mattg wrote: "I'd think that NOT doing that is a bug, and that this should be added to the issue tracker at https://github.com/hmailserver/hmailserver/issues"
Martin doesn't spend a lot of time on the forum any more.
If you do change the source code, you could try submitting it to Martin for review. You may get lucky and have it included in the release.
Scripting is quite easy. I have my Backup-MX hosted with my ISP and they use a round-robin approach to DNS so the HELO check fails on 2 of 3 rDNS lookups also the DKIM check fails for obvious reasons, so I rewrite/recalculate the "X-hMailServer-Spam" and "X-hMailServer-Reason-Score" headers in those cases.
(Sorry I missed this and didnt answer earlier.)
superman20 wrote: "jimimaseye, your example that scored negative is a bit biased. Part of the negative contribution was because the mail is in some trusted whitelist databases. In my experience, you don't find the same IP's and domains both in blacklists AND whitelists... so, while not impossible, it is statistically unlikely that any DNSBL or SURBL hits would have fired for your example."
There is nothing unlikely about this scenario for people who are using geo-blocking (as I am).
In my example, I showed a scenario where a message ended up with a MINUS score. Now, let's say it was sent from China (it wasn't, but it could have been). Still genuine, still allowed, not technically spam (hence its score). BUT... I have a DNSBL rule (zz.countries.nerd.dk) that scores anything coming from China a value of 8, which would be enough to reject this email by hitting my 'delete' threshold of 8 (because I don't want anything from China). And yet, in this example, it clearly would have been allowed in, because -4.4 + 8 is only 3.6 = FAILED.
jimimaseye, I certainly appreciate the point you're trying to make, but your new example is a bit contradictory. You have negative points because your e-mail hits some whitelists and some positive points because the e-mail hits some blacklists. I don't think any spam configuration would properly deal with that sort of conflicting information. I actually implement your example somewhat but deal with it differently. My settings have e-mail that is geo-located from China to automatically score the reject/delete score...BUT I also make sure that these custom "extreme" rules are run first and when they hit then everything else is short-circuited. This prevents me from allowing a legitimate e-mail from getting any negative points when I want all China e-mail blocked (good or bad).
superman20 wrote: "BUT I also make sure that these custom "extreme" rules are run first and when they hit then everything else is short-circuited."
As is my case. And I don't need any special 'coding'/methods to ensure everything else is short-circuited, as this is just how things work currently: my geoblock DNSBL is in HMS, and when it hits the threshold the mail is rejected immediately and never passed to SA (delivery refused). However, your earlier suggestion is that everything should be added together, so logically you wouldn't be able to short-circuit, because after HMS performs its internal checks it would HAVE to call SA and get its score before it could conclude and act on the final total.
You can't have it both ways.
You can somewhat have it both ways if you have spam systems that works together and not independently. Spam checking can definitely stop as soon as the delete threshold is reached. So if the HMS implemented checks hit the delete threshold, then there is no need to call any other checks. However, you must keep going down the chain calling all checks until the delete threshold is reached. Spam testing will never be absolute which is why I strongly feel that it must always be additive. You are adding probabilities and confidence levels that something is spam. If your spam level is 5 and HMS scores 4 and SpamAssassin scores 4 (and assuming a sensible setup where there are no redundant tests), then each one independently says NOT spam, but I'd be willing to bet that it is spam in almost all of those situations.
The problem is that you are suggesting that two totally separate systems, each with its own intensity and complexity (with SA being WAY more advanced than HMS), somehow get 'collated' and their scores shared, despite one being the little runt of the spam-checking fraternity whilst the other is the guru. If HMS scoring allowed negative scores it would be a LITTLE (just) more advanced and more like SA's capabilities, but it doesn't (SA recognises there are positives, and then has reason to double-check and apply negatives to counteract them, something that HMS spam checking doesn't do).
I maintain the two are TOTALLY separate systems (one written and designed by the HMS author, the other created and written by unrelated entities over whom the HMS author has no control). It was designed to be that way (use one or the other, but not both, although it won't stop you) and was designed that way for a reason (recognition that SA will do the job a LOT better than HMS can ever dream of). Using ACTUAL SA scores and adding them to HMS's scoring of its internal checks wouldn't make sense, because SA's idea of which scoring values work, and what they should be, is coded, tested, retested, fine-tuned, modified and implemented after retesting again. HMS scoring is simply (usually) pick a number, an INTEGER only at that, and apply it. For example, an SA SPF-fail check might score only 0.2, whereas in HMS its default is 2 or 3. How can those work together? Still, it will let you, but only once it has taken the advice of the guru of spam checking (SpamAssassin) on whether there is any REAL threat.
That's my view anyway.
Just using some quiet time to implement SpamAssassin
New Windows 10 Pro machine. Enabled HyperV Server and created a new Ubuntu Server install (running on one core and 512 MB RAM) to run SpamAssassin, ClamAV as per >> viewtopic.php?f=21&t=29053
I've found SA marks far lower than I would like.
Playing with SA rules is deep nerdy stuff. I don't want to go and re-score all tests, and potentially break updates, so I have created a new SA rule that simply adds 2.2 to all SA scores. I have set SA to mark as spam if 3 or higher, without changing the subject.
My existing hMailserver AntiSPAM was working pretty good, with a mark at 5 and delete at 60
I've added ClamAV scores including the Sane-Security databases to SA. Currently Mail gets scanned twice by ClamAV, once to score, and then to categorically detect virus - I am watching to see how that works out.
Also using lots of additional databases and filters that I have found here.
Still looking for a good GeoIP addition for Spamassassin.
Still fine-tuning, but catching more SPAM without catching more HAM
@mattg
A nice source of DNSBL lists ready to use with SpamAssassin is listed here:
http://www.intra2net.com/en/support/antispam/index.php
CIDR to RegEx: d-fault.nl/CIDRtoRegEx
DNS Lookup: d-fault.nl/DNSTools
DNSBL Lookup: d-fault.nl/DNSBLLookup
GEOIP Lookup: d-fault.nl/GeoipLookup
Could someone post a copy of your SpamAssassin local.cf with your preferred rules, to let SpamAssassin do all of the DNSBL and URIBL tests? I would like to move all the spam tests to SpamAssassin for a better implementation of the scoring and remove them from hMailServer. It is very confusing when the two scoring systems either do not add together or counter each other. I say let one system score for spam, and maybe have hMailServer do the early SPF and DNS tests unless SpamAssassin can do those as well. Not being able to fine-tune hMailServer except in whole-number integers also skews the scoring: 4.9 is truncated to 4. Thank you.
My setup is here: viewtopic.php?f=21&t=28133 (personal settings are in the 2nd post). You will see I simply set a 'tagged by SA' as 5 in line with the builtin antispam tests. (You dont have to use SA's scoring system).
You can simply use SA exclusively if you wish, by just disabling the built-in antispam tests (DNSBLs etc). I personally (as you will see) use a combination of both. SA does a far better job of antispam testing, so you can be confident that, in the main, if HMS would find it then SA has already found it... and then some.
Thank you for your post. I had seen that setup, but I was hoping there would be a way of just controlling the DNSBL and URIBL tests in local.cf or another file without having to get into all the scripting. Not being a programmer, scripting gets confusing if you do not use it all the time, at least for me. Is there a way to set it in local.cf? Thank you for the help.
In c$\SpamAssassin\share\3.004000\updates_spamassassin_org you will find two files: 20_dnsbl_tests.cf and 25_uribl.cf.
The clever thing with SpamAssassin is that it reads all config files alphabetically ... So if you copy these two files to c$\SpamAssassin\etc\spamassassin and name them my_dnsbl_tests.cf and my_uribl.cf, you can modify them all you want or change the scores, as they are read AFTER the originals.
Anyway, take a look at the files and you'll get the idea of how to build your own lists.
***** Example *****
I have a config (KAM.cf) of about 288 kb... There is this one rule that checks the entire message incl. attachments. If someone sends me an email with a PDF file in it, it usually takes 300+ seconds and then hMail fails:
Code: Select all
#Bad UTF-8 content type and transfer encoding - Thanks to Pedro David Marco for alerting to issue
header __KAM_BAD_UTF8_1 Content-Type =~ /text\/html; charset=\"utf-8\"/i
header __KAM_BAD_UTF8_2 Content-Transfer-Encoding =~ /base64/i
full __RW_BAD_UTF8_3 /^(?:[^\n]|\n(?!\n))*\nContent-Transfer-Encoding:\s+base64(?:[^\n]|\n(?!\n))*\n\n[\s\n]{0,300}[^\s\n].{0,300}[^a-z0-9+\/=\n][^\s\n]/si
meta KAM_BAD_UTF8 (__KAM_BAD_UTF8_1 + __KAM_BAD_UTF8_2 + __RW_BAD_UTF8_3 >= 3)
score KAM_BAD_UTF8 14.0
describe KAM_BAD_UTF8 Bad Content Type and Transfer Encoding that attempts to evade SA scanning
I have created an extra config (KAM-fix.cf) containing only this rule, with "full" replaced by "body" in line 4. Since KAM.cf is read first and then KAM-fix.cf, it changes the rule. Now everything passes in less than 10 seconds, and I don't have to create a script to alter the file every time it is auto-updated:
Code: Select all
#Bad UTF-8 content type and transfer encoding - Thanks to Pedro David Marco for alerting to issue
header __KAM_BAD_UTF8_1 Content-Type =~ /text\/html; charset=\"utf-8\"/i
header __KAM_BAD_UTF8_2 Content-Transfer-Encoding =~ /base64/i
body __RW_BAD_UTF8_3 /^(?:[^\n]|\n(?!\n))*\nContent-Transfer-Encoding:\s+base64(?:[^\n]|\n(?!\n))*\n\n[\s\n]{0,300}[^\s\n].{0,300}[^a-z0-9+\/=\n][^\s\n]/si
meta KAM_BAD_UTF8 (__KAM_BAD_UTF8_1 + __KAM_BAD_UTF8_2 + __RW_BAD_UTF8_3 >= 3)
score KAM_BAD_UTF8 14.0
describe KAM_BAD_UTF8 Bad Content Type and Transfer Encoding that attempts to evade SA scanning
In your experience with whitelisting and blacklisting is there an easy manageable way to add a whitelist/blacklists in spamassassin instead of hmailserver? I like hmailserver fine but not easy to manage the whitelist and blocking rules when they grow like mine have since trying to get a handle on all the different ways to stop spam but not stop ham. I end up adding the same line or rule again and again. I am sure in your experiences you have said I think there is an easier way to to implement this or manage this. Thanks in advance.
My SpamAssassin is reasonably well trained after 3 years, so I have only a few addresses whitelisted in SpamAssassin.
I don't have a blacklist per se... I block emails on multiple levels of identification; body, from, helo and subject, all done in eventhandlers; OnClientConnect(oClient), OnHELO(oClient) and OnAcceptMessage(oClient, oMessage).
80% of what I block is rejected; the rest is marked as SPAM, and my daily SpamAssassin training eventually learns the blacklisted emails, so I can clean out some of the manual blacklist after about a month or so.
I check my custom logs every day and adjust filters if needed. Last time was IIRC 2 weeks ago - and I also built a new IDS function to catch brute force IMAPS logon attempts a few days ago.
Here is a snippet of my custom.cf
(I changed the name so that it wasn't overwritten on SpamAssassin upgrade)
Code: Select all
# Some shortcircuiting, if the plugin is enabled
#
ifplugin Mail::SpamAssassin::Plugin::Shortcircuit
#
# default: strongly-whitelisted mails are *really* whitelisted now, if the
# shortcircuiting plugin is active, causing early exit to save CPU load.
# Uncomment to turn this on
#
shortcircuit USER_IN_WHITELIST on
# shortcircuit USER_IN_DEF_WHITELIST on
shortcircuit USER_IN_ALL_SPAM_TO on
shortcircuit SUBJECT_IN_WHITELIST on
# the opposite; blacklisted mails can also save CPU
#
shortcircuit USER_IN_BLACKLIST on
# shortcircuit USER_IN_BLACKLIST_TO on
# shortcircuit SUBJECT_IN_BLACKLIST on
# if you have taken the time to correctly specify your "trusted_networks",
# this is another good way to save CPU
#
endif # Mail::SpamAssassin::Plugin::Shortcircuit
# don't score URIBL
score URIBL_BLACK 0
score URIBL_RED 0
score URIBL_GREY 0
score URIBL_BLOCKED 0
# DNSBL scores
score URIBL_DBL_SPAM 4
score RCVD_IN_SBL 3
# blacklist from
blacklist_from *.top
blacklist_from *.eu
blacklist_from *.download
blacklist_from *.accountant
blacklist_from *.cf
blacklist_from *.party
blacklist_from *.review
blacklist_from *.faith
blacklist_from *.win
blacklist_from *.trade
blacklist_from *.webcam
blacklist_from *.racing
blacklist_from *.date
blacklist_from *.bid
blacklist_from *.cricket
# whitelist from
whitelist_from *@important_domain.com.au
## BELOW is my ClamAV Integration
## NOT needed for what you are doing
loadplugin ClamAV clamav.pm
full CLAMAV eval:check_clamav()
describe CLAMAV Clam AntiVirus detected something...
score CLAMAV 0.001
# Look for specific types of ClamAV detections
header __CLAMAV_PHISH X-Spam-Virus =~ /Yes.{1,30}Phishing/i
header __CLAMAV_PHISH_HEUR X-Spam-Virus =~ /Yes.{1,30}Phishing\.Heuristics\.Email/
header __CLAMAV_SANE X-Spam-Virus =~ /Yes.{1,30}Sanesecurity/i
header __CLAMAV_MBL X-Spam-Virus =~ /Yes.{1,30}MBL/
header __CLAMAV_MSRBL X-Spam-Virus =~ /Yes.{1,30}MSRBL/
header __CLAMAV_VX X-Spam-Virus =~ /Yes.{1,30}VX\./
# Give the above rules a very late priority so that they can see the output
# of previous rules - otherwise they don't work! Not sure what the correct
# priority should be but this seems to work...
priority __CLAMAV_PHISH 9999
priority __CLAMAV_PHISH_HEUR 9999
priority __CLAMAV_SANE 9999
priority __CLAMAV_MBL 9999
priority __CLAMAV_MSRBL 9999
priority __CLAMAV_VX 9999
# Work out what ClamAV detected and score accordingly
# ClamAV general signatures
meta CLAMAV_VIRUS (CLAMAV && !__CLAMAV_PHISH && !__CLAMAV_SANE && !__CLAMAV_MBL && !__CLAMAV_MSRBL && !__CLAMAV_VX)
describe CLAMAV_VIRUS Virus found by ClamAV default signatures
score CLAMAV_VIRUS 20.0
# ClamAV phishing signatures
meta CLAMAV_PHISH (CLAMAV && __CLAMAV_PHISH && !__CLAMAV_SANE && !__CLAMAV_PHISH_HEUR)
describe CLAMAV_PHISH Phishing email found by ClamAV default signatures
score CLAMAV_PHISH 10.0
# ClamAV phishing with heuristic engine (not signatures based, may lead to false positives)
# Available since ClamAV 0.91
meta CLAMAV_PHISH_HEUR (CLAMAV && __CLAMAV_PHISH_HEUR)
describe CLAMAV_PHISH_HEUR Phishing email found by ClamAV heuristic engine
score CLAMAV_PHISH_HEUR 2.0
# ClamAV SaneSecurity signatures from http://www.sanesecurity.com/clamav/
meta CLAMAV_SANE (CLAMAV && __CLAMAV_SANE)
describe CLAMAV_SANE SPAM found by ClamAV SaneSecurity signatures
score CLAMAV_SANE 15
# ClamAV MBL signatures from http://www.malware.com.br/
meta CLAMAV_MBL (CLAMAV && __CLAMAV_MBL)
describe CLAMAV_MBL Malware found by ClamAV MBL signatures
score CLAMAV_MBL 7.5
# ClamAV MSRBL signatures from http://www.msrbl.com/
meta CLAMAV_MSRBL (CLAMAV && __CLAMAV_MSRBL)
describe CLAMAV_MSRBL SPAM found by ClamAV MSRBL signatures
score CLAMAV_MSRBL 2.0
# ClamAV SecuriteInfo.com VX malware signatures from
# http://www.securiteinfo.com/services/clamav_unofficial_malwares_signatures.shtml
meta CLAMAV_VX (CLAMAV && __CLAMAV_VX)
describe CLAMAV_VX Malware found by SecuriteInfo.com VX signatures
score CLAMAV_VX 5.0
Just 'cause I link to a page and say little else doesn't mean I am not being nice.
https://www.hmailserver.com/documentation
Hi Matt,
I know a few months have already passed. I want to know where to put your custom.cf. Is it under the user directory (~\.spamassassin)?
Is it ok to put the whitelist_from rules in local.cf?
Yep a few months, and I've changed since then
I have a whitelist.cf for just my whitelist entries
I have the KAM rule set in KAM.cf >> https://www.pccc.com/downloads/SpamAssassin/contrib/
I have non-KAM rules from the same source
I have a zzLast.cf to negate any rules that are auto-created from the above two lists
I have a blacklist.cf, a matt.cf, a nerds.cf (does the country of origin stuff) and more.
Seems you can have multiple .cf files, and they ALL get read individually
All of these are in my /etc/spamassassin/ folder on my UBUNTU system. I don't use the Jam Software windows variant of SpamAssassin
Just 'cause I link to a page and say little else doesn't mean I am not being nice.
https://www.hmailserver.com/documentation
mattg wrote: All of these are in my /etc/spamassassin/ folder on my UBUNTU system. I don't use the Jam Software windows variant of SpamAssassin
It's similar on the Windows version. In JAM the local.cf is found (by default) in C:\Program Files\JAM Software\SpamAssassin for Windows\etc\spamassassin. Place other custom .CFs here too.
5.7 on test.
SpamassassinForWindows 3.4.0 spamd service
AV: Clamwin + Clamd service + sanesecurity defs: https://www.hmailserver.com/forum/viewtopic.php?f=21&t=26829
The file path.config in the SpamAssassin directory will specify the locations:
Code: Select all
DEF_RULES_DIR=./share/spamassassin
LOCAL_RULES_DIR=./etc/spamassassin
LOCAL_STATE_DIR=./share
SørenR.
Algorithm(noun.)
Word used by programmers when they do not want to explain what they did.
|
crypt.crypt() refuses non-ASCII passwords
On pypy-3.6-v7.3.2, the zope.password test suite fails as follows:
Failure in test CryptPasswordManager (zope.password.legacy)
Failed doctest test for zope.password.legacy.CryptPasswordManager
File "/home/cjwatson/src/python/zope.password/.tox/pypy3/site-packages/zope/password/legacy.py", line 36, in CryptPasswordManager
----------------------------------------------------------------------
File "/home/cjwatson/src/python/zope.password/.tox/pypy3/site-packages/zope/password/legacy.py", line 52, in zope.password.legacy.CryptPasswordManager
Failed example:
encoded = manager.encodePassword(password, salt="..")
Exception raised:
Traceback (most recent call last):
File "/home/cjwatson/src/python/pyenv/versions/pypy3.6-7.3.2/lib-python/3/doctest.py", line 1332, in __run
compileflags, 1), test.globs)
File "<doctest zope.password.legacy.CryptPasswordManager[6]>", line 1, in <module>
encoded = manager.encodePassword(password, salt="..")
File "/home/cjwatson/src/python/zope.password/.tox/pypy3/site-packages/zope/password/legacy.py", line 118, in encodePassword
return '{CRYPT}%s' % crypt(password, salt)
File "/home/cjwatson/src/python/pyenv/versions/pypy3.6-7.3.2/lib-python/3/crypt.py", line 47, in crypt
return _crypt.crypt(word, salt)
File "/home/cjwatson/src/python/pyenv/versions/pypy3.6-7.3.2/lib_pypy/_crypt/__init__.py", line 19, in crypt
word = word.encode('ascii')
UnicodeEncodeError: 'ascii' codec can't encode character '\u0410' in position 6: ordinal not in range(128)
----------------------------------------------------------------------
File "/home/cjwatson/src/python/zope.password/.tox/pypy3/site-packages/zope/password/legacy.py", line 53, in zope.password.legacy.CryptPasswordManager
Failed example:
encoded
Exception raised:
Traceback (most recent call last):
File "/home/cjwatson/src/python/pyenv/versions/pypy3.6-7.3.2/lib-python/3/doctest.py", line 1332, in __run
compileflags, 1), test.globs)
File "<doctest zope.password.legacy.CryptPasswordManager[7]>", line 1, in <module>
encoded
NameError: name 'encoded' is not defined
----------------------------------------------------------------------
File "/home/cjwatson/src/python/zope.password/.tox/pypy3/site-packages/zope/password/legacy.py", line 55, in zope.password.legacy.CryptPasswordManager
Failed example:
manager.match(encoded)
Exception raised:
Traceback (most recent call last):
File "/home/cjwatson/src/python/pyenv/versions/pypy3.6-7.3.2/lib-python/3/doctest.py", line 1332, in __run
compileflags, 1), test.globs)
File "<doctest zope.password.legacy.CryptPasswordManager[8]>", line 1, in <module>
manager.match(encoded)
NameError: name 'encoded' is not defined
----------------------------------------------------------------------
File "/home/cjwatson/src/python/zope.password/.tox/pypy3/site-packages/zope/password/legacy.py", line 57, in zope.password.legacy.CryptPasswordManager
Failed example:
manager.checkPassword(encoded, password)
Exception raised:
Traceback (most recent call last):
File "/home/cjwatson/src/python/pyenv/versions/pypy3.6-7.3.2/lib-python/3/doctest.py", line 1332, in __run
compileflags, 1), test.globs)
File "<doctest zope.password.legacy.CryptPasswordManager[9]>", line 1, in <module>
manager.checkPassword(encoded, password)
NameError: name 'encoded' is not defined
----------------------------------------------------------------------
File "/home/cjwatson/src/python/zope.password/.tox/pypy3/site-packages/zope/password/legacy.py", line 63, in zope.password.legacy.CryptPasswordManager
Failed example:
isinstance(encoded, str)
Exception raised:
Traceback (most recent call last):
File "/home/cjwatson/src/python/pyenv/versions/pypy3.6-7.3.2/lib-python/3/doctest.py", line 1332, in __run
compileflags, 1), test.globs)
File "<doctest zope.password.legacy.CryptPasswordManager[10]>", line 1, in <module>
isinstance(encoded, str)
NameError: name 'encoded' is not defined
----------------------------------------------------------------------
File "/home/cjwatson/src/python/zope.password/.tox/pypy3/site-packages/zope/password/legacy.py", line 71, in zope.password.legacy.CryptPasswordManager
Failed example:
manager.checkPassword(encoded, password + u"wrong")
Exception raised:
Traceback (most recent call last):
File "/home/cjwatson/src/python/pyenv/versions/pypy3.6-7.3.2/lib-python/3/doctest.py", line 1332, in __run
compileflags, 1), test.globs)
File "<doctest zope.password.legacy.CryptPasswordManager[11]>", line 1, in <module>
manager.checkPassword(encoded, password + u"wrong")
NameError: name 'encoded' is not defined
----------------------------------------------------------------------
File "/home/cjwatson/src/python/zope.password/.tox/pypy3/site-packages/zope/password/legacy.py", line 76, in zope.password.legacy.CryptPasswordManager
Failed example:
manager.checkPassword(encoded, 'completely wrong')
Exception raised:
Traceback (most recent call last):
File "/home/cjwatson/src/python/pyenv/versions/pypy3.6-7.3.2/lib-python/3/doctest.py", line 1332, in __run
compileflags, 1), test.globs)
File "<doctest zope.password.legacy.CryptPasswordManager[12]>", line 1, in <module>
manager.checkPassword(encoded, 'completely wrong')
NameError: name 'encoded' is not defined
----------------------------------------------------------------------
File "/home/cjwatson/src/python/zope.password/.tox/pypy3/site-packages/zope/password/legacy.py", line 89, in zope.password.legacy.CryptPasswordManager
Failed example:
encoded
Expected:
'{CRYPT}erz50QD3gv4Dw'
Got:
"{CRYPT}b'erz50QD3gv4Dw'"
----------------------------------------------------------------------
File "/home/cjwatson/src/python/zope.password/.tox/pypy3/site-packages/zope/password/legacy.py", line 92, in zope.password.legacy.CryptPasswordManager
Failed example:
manager.checkPassword(encoded, password)
Expected:
True
Got:
False
The Python 3 documentation doesn't say anything particular about the word argument: it can be any str, and experimentally it seems to be simply encoded as UTF-8 (i.e. crypt.crypt('\u0410', '..') on Python 3 returns the same string as crypt.crypt(u'\u0410'.encode('UTF-8'), '..') on Python 2).
Perhaps relatedly, you can also see in the test output above that crypt.crypt returns bytes when for compatibility with Python 3 it should return str.
This seems to be a regression in pypy-3.6-v7.2.0. pypy-3.6-v7.1.1 worked fine, while everything from pypy-3.6-v7.2.0 to at least pypy-3.6-v7.3.2 fails. (pypy-3.6-v7.3.3 fails more spectacularly, because import crypt is broken there.) You can test it by cloning zope.password (I currently have commit 8e1dd55e4c) and running tox -e pypy3 with appropriate versions on $PATH.
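For reference, a minimal sketch (assuming a Unix CPython 3 interpreter, where the crypt module is available) of the str-in/str-out behaviour described above; the exact hash depends on the platform's crypt(3):
import crypt

# Non-ASCII passwords are accepted as str; CPython 3 hashes their UTF-8 encoding
# and returns str, not bytes
password = "foobar\u0410"              # ends with CYRILLIC CAPITAL LETTER A
encoded = crypt.crypt(password, "..")
print(type(encoded), encoded)          # <class 'str'> ..xxxxxxxxxxx
# Equivalent Python 2 call: crypt.crypt(u"foobar\u0410".encode("UTF-8"), "..")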
|
Locale-specific attributes | Content bean content hierarchy
The content nodes have two locale-specific attributes, title and description. See the bean help for alternative, shorter names for these attributes. The title must be defined for each locale, though you can set it to the empty string. A new locale is defined by setting the title for it.
# example for manipulating locales of a content node:
# select node, remove all, import, add one manually
Content.select(ID)
Content.empty("locales")
Content.nlsimport("nls/content.nls", "page.visualization")
Content.nlsset("title", "en_GB", "Visualisation")
Content.nlsset("description", "en_GB", "A page for...")
# example for manipulating locales of a content node:
# select node, remove all, import, add one manually
$Content select ID
$Content empty locales
$Content nlsimport nls/content.nls page.visualization
$Content nlsset title en_GB "Visualisation"
$Content nlsset description en_GB "A page for..."
|
How to make an unblocker
I've seen a lot of unblockers out there, but none of them are mine so here's mine.
Concept
The basic concept of an unblocker is that we will utilize repl.it's virtual machine and their internet to provide not only an unblocker, but a vpn! (in california or smth)
Starting out
After you make your repl, we will hack together a .replit file.
run = "pip install webbot ; clear ; python main.py"
Make sure you have a main.py if you are using a bash repl!
What this one-liner does is simply install webbot (our app to open up web browser), clear the screen of the installing notices, and then run main.py!
Our first window
Add this code into your main.py:
from webbot import Browser
web = Browser()
web.go_to("https://repl.it/")
while True: pass
What this code does: from webbot (the previously mentioned library) we import the Browser class. We create an instance, web, and use its go_to method, which navigates to a URL. The while loop stops the program from exiting, so you can actually interact with the website.
Wrapping up
Alright, let's make this user friendly now!
from webbot import Browser
web = Browser()
web.go_to(input("website : "))
while True:
web.type(input("your key : "))
First, we still have the same code, but now it will go to a website that the user specifies! Afterwards, inside our while loop we make it take user input and then type it into the website (like simulating a key press!).
Take a look:
Happy unblocking!!
Closing
This is not only what webbot can do. You can also automate many more things! Read the documentation to learn more!
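As a small sketch of that further automation (the button and field names below are hypothetical and depend on the site you open, so treat this as an illustration rather than a working recipe):
from webbot import Browser

web = Browser()
web.go_to("https://repl.it/")
# Hypothetical interactions - adjust the text/field names to the actual page
web.click("Sign up")
web.type("myname", into="Username")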
|
Intro
All my spouse's digital photo frames are either broken or nearly broken – probably she got them from garage sales. Regardless, they spend 99% of the time black. Now, since I had bought that Raspberry Pi PiDisplay a while back, and it is underutilized, and I know a thing or two about linux, I felt I could create a custom photo frame with things I already have lying around – a Raspberry Pi 3, a PiDisplay, and my personal Google Drive. We make a point to copy all our cameras' pictures onto the Google Drive, which we do the old-fashioned, by-hand way. After 17 years of digital photos we have about 40,000 of them, over 200 GB.
So I also felt obliged to create features you will never have in a commercial product, to make the effort worthwhile. I thought, what about randomly picking a few for display from amongst all the pictures, displaying that subset for a few days, and then moving on to a new randomly selected sample of images, etc? That should produce a nice review of all of them over time, eventually. You need an approach like that because you will never get to the end if you just try to display 40000 images in order!
Equipment
This work was done on a Raspberry Pi 3 running Raspbian Lite (more on that later). I used a display custom-built for the RPi (Amazon.com: Raspberry Pi 7″ Touch Screen Display), though I believe any HDMI display would do.
The scripts
Here is the master file which I call master.sh.
#!/bin/sh
# DrJ 8/2019
# call this from cron once a day to refresh random slideshow once a day
RANFILE="random.list"
NUMFOLDERS=20
DISPLAYFOLDER="/home/pi/Pictures"
DISPLAYFOLDERTMP="/home/pi/Picturestmp"
SLEEPINTERVAL=3
DEBUG=1
STARTFOLDER="MaryDocs/Pictures and videos"
echo "Starting master process at "`date`
rm -rf $DISPLAYFOLDERTMP
mkdir $DISPLAYFOLDERTMP
# listing of all Google drive files starting from the picture root
if [ $DEBUG -eq 1 ]; then echo Listing all files from Google drive; fi
rclone ls remote:"$STARTFOLDER" > files
# filter down to only jpegs, lose the docs folders
if [ $DEBUG -eq 1 ]; then echo Picking out the JPEGs; fi
egrep '\.[jJ][pP][eE]?[gG]$' files |awk '{$1=""; print substr($0,2)}'|grep -i -v /docs/ > jpegs.list
# throw NUMFOLDERS or so random numbers for picture selection, select triplets of photos by putting
# names into a file
if [ $DEBUG -eq 1 ]; then echo Generate random filename triplets; fi
./random-files.pl -f $NUMFOLDERS -j jpegs.list -r $RANFILE
# copy over these 60 jpegs
if [ $DEBUG -eq 1 ]; then echo Copy over these random files; fi
cat $RANFILE|while read line; do
  rclone copy remote:"${STARTFOLDER}/$line" $DISPLAYFOLDERTMP
  sleep $SLEEPINTERVAL
done
# rotate pics as needed
if [ $DEBUG -eq 1 ]; then echo Rotate the pics which need it; fi
cd $DISPLAYFOLDERTMP; ~/rotate-as-needed.sh
cd ~
# kill any qiv slideshow
if [ $DEBUG -eq 1 ]; then echo Killing old qiv and fbi slideshow; fi
pkill -9 -f qiv
sudo pkill -9 -f fbi
pkill -9 -f m2.pl
# remove old pics
if [ $DEBUG -eq 1 ]; then echo Removing old pictures; fi
rm -rf $DISPLAYFOLDER
mv $DISPLAYFOLDERTMP $DISPLAYFOLDER
# run looping fbi slideshow on these pictures
if [ $DEBUG -eq 1 ]; then echo Start fbi slideshow in background; fi
cd $DISPLAYFOLDER ; nohup ~/m2.pl >> ~/m2.log 2>&1 &
if [ $DEBUG -eq 1 ]; then echo "And now it is "`date`; fi
I call the following script random-files.pl:
#!/usr/bin/perl
use Getopt::Std;
my %opt=();
getopts("c:df:j:r:",\%opt);
$nofolders = $opt{f} ? $opt{f} : 20;
$DEBUG = $opt{d} ? 1 : 0;
$cutoff = $opt{c} ? $opt{c} : 5;
$cutoffS = 60*$cutoff;
$jpegs = $opt{j} ? $opt{j} : "jpegs.list";
$ranpicfile = $opt{r} ? $opt{r} : "jpegs-random.list";
print "d,f,j,r: $opt{d}, $opt{f}, $opt{j}, $opt{r}\n" if $DEBUG;
open(JPEGS,$jpegs) || die "Cannot open jpegs listing file $jpegs!!\n";
@jpegs = <JPEGS>;
# remove newline character
$nopics = chomp @jpegs;
open(RAN,"> $ranpicfile") || die "Cannot open random picture file $ranpicfile!!\n";
for($i=0;$i<$nofolders;$i++) {
$t = int(rand($nopics-2));
print "random number is: $t\n" if $DEBUG;
# a lot of our pics follow this naming convention
# 20160831_090658.jpg
($date,$time) = $jpegs[$t] =~ /(\d{8})_(\d{6})/;
if ($date) {
print "date, time: $date $time\n" if $DEBUG;
# ensure neighboring picture is at least five minutes different in time
$iPO = $iP = $diff = 0;
($hr,$min,$sec) = $time =~ /(\d\d)(\d\d)(\d\d)/;
$secs = 3600*$hr + 60*$min + $sec;
print "Pre-pic logic\n";
while ($diff < $cutoffS) {
$iP++;
$priorPic = $jpegs[$t-$iP];
$Pdate = $Ptime = 0;
($Pdate,$Ptime) = $priorPic =~ /(\d{8})_(\d{6})/;
($Phr,$Pmin,$Psec) = $Ptime =~ /(\d\d)(\d\d)(\d\d)/;
$Psecs = 3600*$Phr + 60*$Pmin + $Psec;
print "hr,min,sec,Phr,Pmin,Psec: $hr,$min,$sec,$Phr,$Pmin,$Psec\n" if $DEBUG;
$diff = abs($secs - $Psecs);
print "diff: $diff\n" if $DEBUG;
# end our search if we happened upon different dates
$diff = 99999 if $Pdate ne $date;
}
# post-picture logic - same as pre-picture
print "Post-pic logic\n";
$diff = 0;
while ($diff < $cutoffS) {
$iPO++;
$postPic = $jpegs[$t+$iPO];
$Pdate = $Ptime = 0;
($Pdate,$Ptime) = $postPic =~ /(\d{8})_(\d{6})/;
($Phr,$Pmin,$Psec) = $Ptime =~ /(\d\d)(\d\d)(\d\d)/;
$Psecs = 3600*$Phr + 60*$Pmin + $Psec;
print "hr,min,sec,Phr,Pmin,Psec: $hr,$min,$sec,$Phr,$Pmin,$Psec\n" if $DEBUG;
$diff = abs($Psecs - $secs);
print "diff: $diff\n" if $DEBUG;
# end our search if we happened upon different dates
$diff = 99999 if $Pdate ne $date;
}
} else {
$iP = $iPO = 2;
}
$priorPic = $jpegs[$t-$iP];
$Pic = $jpegs[$t];
$postPic = $jpegs[$t+$iPO];
print RAN qq($priorPic
$Pic
$postPic
);
}
close(RAN);
Bunch of simple python scripts
I call this one getinfo.py:
#!/usr/bin/python3
import os,sys
from PIL import Image
from PIL.ExifTags import TAGS
for (tag,value) in Image.open(sys.argv[1])._getexif().items():
    print ('%s = %s' % (TAGS.get(tag), value))
And here’s rotate.py:
#!/usr/bin/python3
import PIL, os
import sys
from PIL import Image
picture= Image.open(sys.argv[1])
# if orientation is 6, rotate clockwise 90 degrees
picture.rotate(-90,expand=True).save("rot_" + sys.argv[1])
While here is rotatecc.py:
#!/usr/bin/python3
import PIL, os
import sys
from PIL import Image
picture= Image.open(sys.argv[1])
# if orientation is 8, rotate counterclockwise 90 degrees
picture.rotate(90,expand=True).save("rot_" + sys.argv[1])
And rotate-as-needed.sh:
#!/bin/sh
# DrJ 12/2020
# some of our downloaded files will be sideways, and fbi doesn't auto-rotate them as far as I know
# assumption is that our current directory is the one where we want to alter files
ls -1|while read line; do
  echo file is "$line"
  o=`~/getinfo.py "$line"|grep -i orientation|awk '{print $NF}'`
  echo orientation is $o
  if [ "$o" -eq "6" ]; then
    echo "90 clockwise is needed, o is $o"
    # rotate and move it
    ~/rotate.py "$line"
    mv rot_"$line" "$line"
  elif [ "$o" -eq "8" ]; then
    echo "90 counterclock is needed, o is $o"
    # rotate and move it
    ~/rotatecc.py "$line"
    mv rot_"$line" "$line"
  fi
done
And finally, m2.pl:
#!/usr/bin/perl
# show the pics ; rotate the screen as needed
# for now, assume the display is in a neutral
# orientation at the start
use Time::HiRes qw(usleep);
$DEBUG = 1;
$delay = 6; # seconds between pics
$mdelay = 200; # milliseconds
$mshow = "$ENV{HOME}/mediashow";
$pNames = "$ENV{HOME}/pNames";
# pics are here
$picsDir = "$ENV{HOME}/Pictures";
chdir($picsDir);
system("ls -1 > $pNames");
# further massage names
open(TMP,"$pNames");
@lines = <TMP>;
foreach (@lines) {
chomp;
$filesNullSeparated .= $_ . "\0";
}
open(MS,">$mshow") || die "Cannot open mediashow file $mshow!!\n";
print MS $filesNullSeparated;
close(MS);
print "filesNullSeparated: $filesNullSeparated\n" if $DEBUG;
$cn = @lines;
print "$cn files\n" if $DEBUG;
# throw up a first picture - all black. Trick to make black bckgrd permanent
system("sudo fbi -a --noverbose -T 1 $ENV{HOME}/black.jpg");
system("sudo fbi -a --noverbose -T 1 $ENV{HOME}/black.jpg");
sleep(1);
system("sleep 2; sudo killall fbi");
# start infinitely looping fbi slideshow
for (;;) {
# then start slide show
# shell echo cannot work with null character so we need to use a file to store it
#system("cat $picNames|xargs -0 qiv -DfRsmi -d $delay \&");
system("sudo xargs -a $mshow -0 fbi -a --noverbose -1 -T 1 -t $delay ");
# fbi runs in background, then exits, so we need to monitor if it's still alive
# wait appropriate estimated amount of time, then look aggressively for fbi
sleep($delay*($cn - 2));
for(;;) {
open(MON,"ps -ef|grep fbi|grep -v grep|") || die "Cannot launch ps -ef!!\n";
$match = <MON>;
if ($match) {
print "got fbi match\n" if $DEBUG > 1;
} else {
print "no fbi match\n" if $DEBUG;
# fbi not found
last;
}
close(MON);
print "usleeping, noexist is $noexit\n" if $DEBUG > 1;
usleep($mdelay);
} # end loop testing if fbi has exited
} # close of infinite loop
You’ll need to make these files executable. Something like this should work:
$ chmod +x *.py *.pl *.sh
My crontab file looks like this (you edit crontab using the crontab -e command):
@reboot sleep 25; cd ~ ; ./m2.pl >> ./m2.log 2>&1
24 16 * * * ./master.sh >> ./master.log 2>&1
This invokes master.sh once a day at 4:24 PM to refresh the 60 photos. My refresh took about 13 minutes the other day, but the old slideshow keeps playing until almost the last second, so it’s OK.
The nice thing about this approach is that fbi works with a lightweight OS – Raspbian Lite is fine, you’ll just need to install a few packages. My SD card is unstable or something, so I have to re-install the OS periodically. An install of Raspberry Pi Lite on my RPi 4 took 11 minutes. Anyway, fbi is installed via:
$ sudo apt-get install fbi
But if your RPi is freshly installed, you may first need to do a
$ sudo apt-get update && sudo apt-get upgrade
python image manipulation
The drawback of this approach, i.e., not using qiv, is that we gotta do some image manipulation, for which python is the best candidate. I’m going by memory. I believe I installed python3, perhaps as sudo apt-get install python3. Then I needed pip3: sudo apt-get install python3-pip. Then I needed to install Pillow using pip3: sudo pip3 install Pillow.
m2.pl refers to a black.jpg file. It’s not a disaster to not have that, but under some circumstances it may help. There it is!
Many of my photos do not have EXIF information, yet they can still be displayed. So for those photos running getinfo.py will produce an error (but the processing of the other photos will continue.)
I was originally rotating the display 90 degrees as needed to display the photos using the maximum amount of display real estate. But that all broke when I tried to revive it. And the cheap servo motor was noisy. But folks were pretty impressed when I demoed it, because I did get it to the point where it was indeed working correctly.
Picture selection methodology
There are 20 "folders" (random numbers), each one a triplet of photos, for 60 pictures in all. The idea is to give you additional context to help jog your memory. The triplets, with some luck, will often be from the same time period.
I observed how many similar pictures are adjacent to each other amongst our total collection. To avoid identical pictures, I require the pictures to be five minutes apart in time. Well, I cheated. I don’t pull out the timestamp from the EXIF data as I should (at least not yet – future enhancement, perhaps). But I rely on a file-naming convention I notice is common – 20201227_134508.jpg, which basically is a timestamp-encoded name. The last six digits are HHMMSS in case it isn’t clear.
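For what it's worth, here is a small Python sketch (separate from the scripts above) of pulling the date and HHMMSS time back out of a filename that follows that convention:
#!/usr/bin/python3
# Parse a 20201227_134508.jpg style filename into date parts and seconds-of-day
import re

name = "20201227_134508.jpg"
m = re.match(r"(\d{4})(\d{2})(\d{2})_(\d{2})(\d{2})(\d{2})", name)
if m:
    year, month, day, hh, mm, ss = m.groups()
    seconds_of_day = 3600 * int(hh) + 60 * int(mm) + int(ss)
    print(year, month, day, seconds_of_day)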
Rclone
You must install the rclone package, sudo apt-get install rclone.
Can you configure rclone on a headless Raspberry Pi?
Indeed you can. I know because I just did it. You enable your Pi for ssh access. do the rclone-config (or whatever it’s called) using putty from a Windows 10 system. You’ll get a long Google URL in the course of configuring that you can paste into your browser. You verify it’s you, log into your Google account. Then you get back a url like http://127.0.0.1:5462/another-long-url-string. Well, put that url into your clipboard and in another login window, enter curl clipboard_contents
That’s what I did, not certain it would work, but I saw it go through in my rclone-config window, and that was that!
Don’t want to deal with rclone?
So you want to use a traditional flash drive you plug in to a USB port, just like you have for the commercial photo frames, but you otherwise like my approach of randomizing the picture selection each day? I'm sure that is possible. A mid-level linux person could rip out the rclone stuff I have embedded and replace it as needed with filesystem commands. I'm imagining a colossal flash drive with all your tens of thousands of pictures on it where my random selection still adds value. If this post becomes popular enough perhaps I will post exactly how to do it.
Getting started with this
After you've done all that and want to try it out, you can run
$ ./master.sh
First you should see a file called files growing in size – that's rclone doing its listing. That takes a few minutes. Then it generates random numbers for photo selection – that's very fast, maybe a second. Then it slowly copies over the selected images to a temporary folder called Picturestmp. That's the slowest part. If you do a directory listing you should see the number of images in that directory growing slowly, adding maybe three per minute until it reaches 60 of them. Finally the rotations are applied. But even if you didn't set up your python environment correctly, it doesn't crash; it effectively skips the rotations. A rotation takes a couple of seconds per image. Finally all the images are copied over to the production area, the directory called Pictures; the old slideshow program is "killed," and the new slideshow starts up. The whole process takes around 15 minutes.
I highly recommend running master.sh by hand as just described to make sure it all works. Probably some of it won’t. I don’t specialize in making recipes, more just guidance. But if you’re feeling really bold you can just power it up and wait a day (because initially you won’t have any pictures in your slideshow) and pray that it all works.
Still missing
I’d like to display a transition image when switching from the current set of photos to the new ones.
Suppressing boot up messages might be nice for some. Personally I think they’re kind of cool – makes it look like you’ve done a lot more techie work than you actually have!
You’re going to get some junk images. I’ve seen where an image is a thumbnail (I guess) and gets blown up full screen so that you see these giant blocks of pixels. I could perhaps magnify those kind of images less.
Movies are going to be tricky so let’s not even go there…
I was thinking about making it a navigation-enabled photo frame, such as integration with a Gameboy controller. You could do some really awesome stuff: Pause this picture; display the location (town or city) where this photo was taken; refresh the slideshow. It sounds fantastical, but I don’t think it’s beyond the capability of even modestly capable hobbyist programmers such as myself.
I may still spin the frame 90 degrees this way and that. I have the servo mounted and ready. Just got to revive the control commands for it.
References and related
This 7″ display is a little small, but it’s great to get you started. It’s $64 at Amazon: Amazon.com: Raspberry Pi 7″ Touch Screen Display: Electronics
I have an older approach using qiv which I lost the files for, and my blog post got corrupted. Hence this new approach.
My advanced slideshow treatment is beginning to take shape. I just add to it while I develop it, so check it periodically if that is of interest. Raspberry Pi advanced photo frame.
|
So I ran my summarizer yesterday and it took literally all day to run only 200 products through the lex sum function. So I went through my code and added a timer for each major step in the process like so:
start = time.time()
asin_list = get_asins(limit)
end = time.time()
print('Get ASINs: ', end - start)
Turns out it was taking over 60 seconds per query. I did the math and at the rate it was going, it would take almost two years to complete every product in my database. So I started looking around at different ways to speed up large databases. Turns out databases are a lot more complicated than I believed. It felt like looking for a PHP solution back in high school when I didn't know enough to know what to look for. Finally I stumbled upon a feature called indexing. First I added the indexing code inside of my script, which had no effect, but it seemed like it had worked properly. Still, I was not going to give up that easy, and I decided to open up postgres directly in the terminal and poke around to see if the indexing was applied properly. Turns out that it was not applied at all. Here is the code I used to index the asin column in reviews:
# Remote connect (run from the shell)
psql -U ryan -h 162.196.142.159 -p 5432 databasename

-- Display the table's indexes
SELECT * FROM pg_indexes WHERE tablename = 'reviews';

-- Create the index
CREATE INDEX asin_index ON reviews (asin);
Eureka! It worked – now the script that took all day to run yesterday ran in about a minute flat! That is the biggest difference in performance I've ever experienced and I can't wait to see where else indexing will help my databases.
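If I wanted to double-check that the planner is actually using the new index, a quick sketch with psycopg2 (assuming it is installed, reusing the connection details above; the ASIN value is made up) could run EXPLAIN on the query:
import psycopg2

# Connection details match the psql command above
conn = psycopg2.connect(host="162.196.142.159", port=5432,
                        dbname="databasename", user="ryan")
cur = conn.cursor()
# The plan should mention asin_index instead of a sequential scan
cur.execute("EXPLAIN SELECT * FROM reviews WHERE asin = %s", ("B000EXAMPLE",))
for (plan_line,) in cur.fetchall():
    print(plan_line)
cur.close()
conn.close()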
Other than that, Erin showed me a bunch of stuff in Illustrator and Photoshop.
ctrl+click with select tool enables auto-select
ctrl+d — deselect
ctrl+shift+i — invert selection
ctrl+j — duplicate layer
ctrl+alt+j — duplicate and name layer
|
I have seen a few Python scripts which use this at the top of the script. In what cases should one use it?
import sys
reload(sys)
sys.setdefaultencoding("utf-8")
As per the documentation: This allows you to switch from the default ASCII to other encodings such as UTF-8, which the Python runtime will use whenever it has to decode a string buffer to unicode.
This function is only available at Python start-up time, when Python scans the environment. It has to be called in a system-wide module, sitecustomize.py. After this module has been evaluated, the setdefaultencoding() function is removed from the sys module.
The only way to actually use it is with a reload hack that brings the attribute back.
Also, the use of sys.setdefaultencoding() has always been discouraged, and it has become a no-op in py3k. The encoding of py3k is hard-wired to "utf-8" and changing it raises an error.
I suggest some pointers for reading:
The answer is NEVER! (unless you really know what you're doing)
9/10 times the solution can be resolved with a proper understanding of encoding/decoding.
1/10 people have an incorrectly defined locale or environment and need to set:
PYTHONIOENCODING="UTF-8"
in their environment to fix console printing problems.
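As a rough illustration of what "a proper understanding of encoding/decoding" looks like in practice (a sketch added here, not part of the original answer): decode bytes to unicode at the input boundary and encode explicitly on output, instead of relying on the default encoding:
# Python 2 sketch: decode at the boundary, work in unicode, encode on the way out
import sys

raw = "\xe2\x82\xac"                       # UTF-8 bytes for the euro sign
text = raw.decode("utf-8")                 # u'\u20ac', now a unicode object
line = u"price: {}".format(text)           # safe to combine with other unicode
sys.stdout.write(line.encode("utf-8") + "\n")   # explicit encode on output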
sys.setdefaultencoding("utf-8") (struck through to avoid re-use) changes the default encoding/decoding used whenever Python 2.x needs to convert a Unicode() to a str() (and vice-versa) and the encoding is not given. I.e.:
str(u"\u20AC")
unicode("€")
"{}".format(u"\u20AC")
In Python 2.x, the default encoding is set to ASCII and the above examples will fail with:
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 0: ordinal not in range(128)
(My console is configured as UTF-8, so "€" = '\xe2\x82\xac', hence exception on \xe2)
or
UnicodeEncodeError: 'ascii' codec can't encode character u'\u20ac' in position 0: ordinal not in range(128)
sys.setdefaultencoding("utf-8") will allow these to work for me, but won't necessarily work for people who don't use UTF-8. The default of ASCII ensures that assumptions of encoding are not baked into code.
sys.setdefaultencoding("utf-8") also has a side effect of appearing to fix sys.stdout.encoding, used when printing characters to the console. Python uses the user's locale (Linux/OS X/Un*x) or codepage (Windows) to set this. Occasionally, a user's locale is broken and just requires PYTHONIOENCODING to fix the console encoding.
Example:
$ export LANG=en_GB.gibberish
$ python
>>> import sys
>>> sys.stdout.encoding
'ANSI_X3.4-1968'
>>> print u"\u20AC"
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
UnicodeEncodeError: 'ascii' codec can't encode character u'\u20ac' in position 0: ordinal not in range(128)
>>> exit()
$ PYTHONIOENCODING=UTF-8 python
>>> import sys
>>> sys.stdout.encoding
'UTF-8'
>>> print u"\u20AC"
€
People have been developing against Python 2.x for 16 years on the understanding that the default encoding is ASCII. UnicodeError exception handling methods have been written to handle string to Unicode conversions on strings that are found to contain non-ASCII.
def welcome_message(byte_string):
try:
return u"%s runs your business" % byte_string
except UnicodeError:
return u"%s runs your business" % unicode(byte_string,
encoding=detect_encoding(byte_string))
print(welcome_message(u"Angstrom (Å®)".encode("latin-1")))
Previous to setting defaultencoding this code would be unable to decode the “Å” in the ascii encoding and then would enter the exception handler to guess the encoding and properly turn it into unicode. Printing: Angstrom (Å®) runs your business. Once you’ve set the defaultencoding to utf-8 the code will find that the byte_string can be interpreted as utf-8 and so it will mangle the data and return this instead: Angstrom (Ů) runs your business.
Changing what should be a constant will have dramatic effects on modules you depend upon. It's better to just fix the data coming in and out of your code.
While the setting of defaultencoding to UTF-8 isn't the root cause in the following example, it shows how problems are masked and how, when the input encoding changes, the code breaks in an unobvious way: UnicodeDecodeError: 'utf8' codec can't decode byte 0x80 in position 3131: invalid start byte
|
Testing that the message bus calls the functions listed in one of its global variables.
messagebus.py
from . import handlers
class MessageBus:
...
def handle_event(self, event: events.Event):
handlers_ = EVENT_HANDLERS.get(type(event))
...
EVENT_HANDLERS = {
events.BatchConfirmed: [handlers.send_remittance_to_connect, handlers.send_payment_information_to_vendor],
}
tests_handlers.py
from . import messagebus
def test_payment_batch_confirm_calls_expected_handlers(mocker): # using pytest
# GIVEN mocked event handlers to isolate tests
send_remittance_to_connect_mock = \
mocker.patch('src.disbursement.messagebus.handlers.send_remittance_to_connect')
send_payment_information_to_vendor_mock = \
mocker.patch('src.disbursement.handlers.send_payment_information_to_vendor')
create_import_je_mock = mocker.patch('src.disbursement.handlers.create_import_je')
# GIVEN a payment batch with a confirmation
bus = messagebus.MessageBus()
# WHEN invoking the cli to add the confirmation number
bus.handle(commands.ConfirmPaymentBatch(
reference=batch.reference, confirmation='PR-234848-493333333',
payment_date=pd.Timestamp(year=2019, month=11, day=23)
))
# THEN the expected handlers are called
assert send_remittance_to_connect_mock.called # this fails even though it is in the dictionary's list
assert send_payment_information_to_vendor_mock.called
assert create_import_je_mock.called
The test above fails with this error.
AssertionError: assert False
where False = <MagicMock name='send_remittance_to_connect' id='140228606012384'>.called
I would expect it to fail on the last assertion, because the patched handler function is present in the EVENT_HANDLERS dictionary. Stepping through with a debugger, the patch itself works, but the mocked function is not the one stored in the message bus's EVENT_HANDLERS dictionary. I think the problem is that the import statement in the test file loads the global EVENT_HANDLERS dictionary before the functions are mocked, so they are not replaced when the message bus is instantiated. Also, this is the only test being run, so there are no other tests interfering.
I looked at monkey patching the EVENT_HANDLERS dictionary, but since the test checks whether these handlers are called, monkey patching them would, as far as I can tell, defeat the purpose of the test. How can I mock these functions, or how else can this be tested?
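A small sketch of the import-time binding issue described above (stand-alone, hypothetical names, not the project's actual modules): once a function reference has been stored in a dictionary, patching the attribute on its module does not change the stored reference, which is why a dict-level patch (for example mocker.patch.dict on EVENT_HANDLERS) would be needed to swap the handler the bus actually calls.
from unittest import mock

def real_handler():
    return "real"

# The dict captures a reference to the function object at definition time
EVENT_HANDLERS = {"BatchConfirmed": [real_handler]}

with mock.patch(__name__ + ".real_handler") as handler_mock:
    # The module attribute now points at the MagicMock...
    assert globals()["real_handler"] is handler_mock
    # ...but the reference already stored in the dict is untouched, so a bus
    # dispatching through EVENT_HANDLERS would never call the mock
    assert EVENT_HANDLERS["BatchConfirmed"][0] is not handler_mock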
|
I’m trying to do a script to import a number of STLs, and for each one imported to name it as the filename.
browsed = rs.BrowseForFolder(title = "Auto-name Imported Files - Location Picker")
files = rs.OpenFileNames(title = "Auto-name Imported Files - File Picker", folder=browsed)
for filename_iteration in files:
    rs.Command('_-Import ' + filename_iteration + ' _Enter')
    object_guid = rs.GetObject(preselect=True)
    rs.ObjectName(object_guid, filename_iteration)
    rs.Command('_SelNone ')
The only thing so far throwing it off is if a filename has a space in it, e.g. filepath… 519430376-4 WO.stl
My guess is I need to ‘re-stringify’ each full filepath and loop those instead. So far I am just using the Rhino behaviour of auto selecting something imported to get the imported GUID and then rename it.
For a file called: 519430376-4 WO.stl, the below is reported:
C:\Users\j.hutchinson\Downloads\wetransfer-1ba4b3\519430376-4.3dm not found, unable to open
So after the space it’s assuming a 3dm.
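One way the "re-stringify" guess above could be tried (a sketch, not verified here) is to wrap each full path in double quotes so the _-Import command treats a path containing spaces as a single token:
import rhinoscriptsyntax as rs

browsed = rs.BrowseForFolder(title="Auto-name Imported Files - Location Picker")
files = rs.OpenFileNames(title="Auto-name Imported Files - File Picker", folder=browsed)
for filename_iteration in files:
    # Double quotes keep a path like "...\519430376-4 WO.stl" together as one argument
    rs.Command('_-Import "{}" _Enter'.format(filename_iteration))
    object_guid = rs.GetObject(preselect=True)
    rs.ObjectName(object_guid, filename_iteration)
    rs.Command('_SelNone')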
|
I've got it! I had to close the document in the PDF viewer before creating a line annotation with PDF Tools Library function PXCp_AddLineAnnotationW!
I still don't understand whether I can create a rectangle as one annotation or I have to construct it generating 4 single lines? The third parameter (LPCPXC_RectF rect) in this function is according to the documentation a bounding rectangle of the
annotation. What is the use of it?
One more question. Is it possible to suppress the pop-up annotation dialog that appears after a double-click on the annotation? Are there any parameter in the PXCp_AddLineAnnotationW function to manage it. I've only found the flag PXC_AnnotsFlags.AF_ReadOnly of the PXC_CommonAnnotInfo class, that makes the line annotation (or the annotation bounding rectangle?) read-only.
Tracker Supp-Stefan
Hi Relapse,
It is definitely possible with the low level functions, but unfortunately the only High Level ones are for adding annotations - not for deleting them.
Best,
Stefan
Tracker Supp-Stefan
Hello relapse,
As mentioned before - there are no high level functions that will allow you to delete annotations.
So you will need to read on annotations in the PDF Reference:
http://wwwimages.adobe.com/www.adobe.co ... ce_1-7.pdf
section 8.4 Annotations in the above document
or section 12.4 Annotations in the ISO version of the file:
http://wwwimages.adobe.com/www.adobe.co ... 0_2008.pdf
And then utilize the low level functions described in 3.2.5 PDF Dictionary Functions of our PDF Tools SDK manual to read and manipulate the annotations dictionary as needed.
Alternatively - you could use JS while you have the files opened in the Viewer AX. This should be quite a lot easier to implement, and will still allow you to create/edit/delete annotations as needed.
Best,
Stefan
Tracker
Tracker Supp-Stefan
Hi Relapse,
There are some snippets inside the manual, but there isn't anything more complex - as those are low level functions giving you access to the very structure of the PDF File and the way you would like to use such methods will greatly vary from case to case. You will need to get yourself acquainted with the PDF specification to be able to use those successfully.
Best,
Stefan
I do read the pdf specification
I cannot understand how it is possible to access the Annotations-dictionary of a ceratin page.
I've found the function PXCp_ObjectGetDictionary, but it needs an object handle. Where can I get it?
Tracker Supp-Stefan
Hi relapse,
You might want to use functions like PXCp_llGetObjectByIndex to obtain an object first, and then here is the sample from the manual for using the PXCp_ObjectGetDictionary function:
Code: Select all
// Retrieve object's dictionary
HPDFOBJECT hObject;
...
HPDFDICTIONARY hDict;
hr = PXCp_ObjectGetDictionary(hObject, &hDict);
if (IS_DS_FAILED(hr))
{
// report error
...
}
Best,
Stefan
I try to use the PXC_Rect function in order to draw a real rectangle and not an annotation.
HRESULT PXC_Rect(
_PXCContent* content,
double left,
double top,
double right,
double bottom
);
Parameters
content [in] Parameter content specifies the identifier for the page content to which the function will be applied.
What is this identifier for the page content and how can I get it?
Thanks!
Tracker Supp-Stefan
Hi Relapse,
This method is from the PXCLIB40 set of functions - those are aimed at creating new PDF document from scratch - so you can not use that to just add a rectangle to an already existing page I am afraid.
Otherwise - you can see how the content identifier is to be set up in the sample projects in
C:\Program Files\Tracker Software\PDF-XChange PRO 4 SDK\Examples\SDKExamples\<<YOUR Programming language>>\PDFXCDemo
Best,
Stefan
Thanks, Stefan, your patience is honorable.
Is there any difference between
HRESULT PXCp_Init(PDFDocument* pObject, LPCSTR Key, LPCSTR DevCode);
and
HRESULT PXC_NewDocument(_PXCDocument** pdf, LPCSTR key, LPCSTR devCode);
? Are the two parameters PDFDocument* pObject and _PXCDocument** pdf identical?
I've tried to mix the use of both libraries, but I got an AccessViolationException executing the PXC_GetPage function:
Code: Select all
int pageContentIdentifier;
int pdfHandle;
int pdfPage = 0;
PdfXchangePro.PXCp_Init(out pdfHandle, PdfXchangePro.SerialNumber, PdfXchangePro.DevelopmentCode);
PdfXchangePro.PXCp_ReadDocumentW(pdfHandle, _tempFile, 0);
PdfXchange.PXC_GetPage(pdfHandle, pdfPage, out pageContentIdentifier);
PdfXchange.PXC_Rect(pdfHandle, 20, 100, 100, 20);
I've also found no function to delete a newly created (with PXC_Rect) graphical object, or is it not possible at all?
Tracker Supp-Stefan
Hi Relapse,
I am afraid you can't mix methods from the two libraries. You will need to create and save a PDF file using the PXC_ methods, and then open it and modify it using the PXCp_ ones.
As PXC_ methods are designed for building up PDF files - there are no delete methods - as you are creating a pdf file or page starting from an empty one and only adding the components you want.
Best,
Stefan
Yesterday I managed to draw a rectangle I needed but the restriction is - it must be a new pdf document. It's a pity!
Now I'm trying to delete line annotations directly in dictionaries. By the way, I can create a line annotation of any thickness with the function PXCp_AddLineAnnotationW; there is no such limit of 20 points as in JS. But I really miss examples for handling the dictionaries. I've found an example in the forum http://www.tracker-software.com/forum3/ ... nnotationW but it's in C++ and I'm fighting with translation of the low-level functions' declarations into C#.
Tracker Supp-Stefan
Site Admin
Posts:14208
Joined:Mon Jan 12, 2009 8:07 am
Location:London
Contact:
Hello Relapse,
Glad to hear that you got it working. And great to hear there are no width limitations with the PXCp_AddLineAnnotationW method.
As for samples for handling dictionaries - I am afraid that I can't help - any samples would probably be in the PDF Specification itself.
Best,
Stefan
The best advice here is to look at the C# wrappers for other projects. It is important to use the proper marshalling for types like BSTR and LPWSTR (from C# "string" types). If you look at function declarations for DLL imports in C# you'll often see a function argument prefixed by something like:
Code: Select all
[MarshalAs(UnmanagedType.LPWStr)]
Code: Select all
sometype somefunction([MarshalAs(UnmanagedType.LPWStr)] string InputLPWSTR);
UnmanagedType has a lot of members (LPWStr, BStr, etc) that you can specify for different scenarios. Check MSDN for details or use autocomplete in Visual Studio to see a list.
Also note the use of "ref" and "out" keywords that are used when the API function takes a pointer. "ref" means C# will check to see if the value is initialized; "out" means it may be uninitialized and is expected to be set by the function.
Code: Select all
E.g. C++:
HRESULT calculate_property_of_mystruct(mystruct* input, int* output);
would be imported into C# with:
... calculate_property_of_mystruct(ref mystruct input, out int output);
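For completeness, here is a fuller, purely illustrative C# declaration that puts MarshalAs, ref and out together. The DLL name, the function name and the struct are hypothetical placeholders, not part of the PDF-XChange API:
Code: Select all
using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]
public struct MyStruct
{
    public double Left;
    public double Top;
}

public static class NativeMethods
{
    // The LPWSTR string is marshalled explicitly; the struct is passed by ref,
    // and the integer result comes back through an out parameter.
    [DllImport("somelibrary.dll", CharSet = CharSet.Unicode)]
    public static extern int calculate_property_of_mystruct(
        ref MyStruct input,
        [MarshalAs(UnmanagedType.LPWStr)] string name,
        out int output);
}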
Lots of reading here:
http://msdn.microsoft.com/en-us/library/26thfadc.aspx
http://msdn.microsoft.com/en-us/library/fzhhdwae.aspx
|
I develop software for Python and Android.
Yes, you read that right. If you have been coding for a while and if Python is not the first programming language that you started with, then you definitely know what a Switch statement is, and appreciate how flawless it is, when you need to factor in multiple conditions/cases for a control flow.
But if you are just starting out or are using Python as the first language, there is a chance that you don't know about the Switch statement and don't know how smooth it is to code a control flow statement using it.
This is what a switch statement looks like in Java:
switch (variable/expression) {
case value1:
// statements of case1
break;
case value2:
// statements of case2
break;
.. .. ...
.. .. ...
default:
// default statements
}
You use the parentheses to pass a variable into the switch statement, and you define cases, which are just different conditions. The variable that you pass to the switch statement is compared with each of these cases, and whichever case condition is satisfied gets executed.
The default block is what is executed when the variable doesn't satisfy any of the given case conditions.
Yeah, I know what you might be thinking by now. What's the big deal? You could just use the
if-elif-else
statement instead?
Sure, you could but when dealing with multiple conditions the switch statement is more convenient and feels a lot cleaner.
So, now to the main part: how do we get a switch statement in Python?
Well, to be honest, there is no switch statement in Python. But wait, don't click the close button just yet. I have a workaround to this problem.
So here it is, the Pythonic answer to the switch statement. Python's dictionary object can be used as a switch statement and the implementation is very easy and feels intuitive to use.
I will show you how to write a switch statement using Python with an example.
Suppose you want to write a function
month_tracker()
, that takes in the current month as an integer and returns the name of the month.
If you use
if-elif-else
statement, this is how your code would look:
def month_tracker(month):
if month == 1:
month_name = 'January'
elif month == 2:
month_name = 'February'
elif month == 3:
month_name = 'March'
elif month == 4:
month_name = 'April'
elif month == 5:
month_name = 'May'
elif month == 6:
month_name = 'June'
elif month == 7:
month_name = 'July'
elif month == 8:
month_name = 'August'
elif month == 9:
month_name = 'September'
elif month == 10:
month_name = 'October'
elif month == 11:
month_name = 'November'
elif month == 12:
month_name = 'December'
else :
month_name = 'January'
return month_name
And if we did it the switch statement-esque way,
def month_tracker(month):
switch = {
1 : 'January',
2 : 'February',
3 :'March',
4 :'April',
5 :'May',
6 :'June',
7 :'July',
8 :'August',
9 :'September',
10 :'October',
11 :'November',
12 :'December',
}
    return switch.get(month, 'January')  # default mirrors the else branch of the if-elif version
See what I was talking about? Notice how good and clean the second approach looks.
How easy is it to write and structure the code using this approach?
And the added benefit is, it looks COOL!
Go ahead and try it out on your next program.
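As a further illustration (my own addition, not from the original article), the same dictionary trick can dispatch to functions, not just look up constants:
def handle_add(x, y):
    return x + y

def handle_sub(x, y):
    return x - y

def calculate(op, x, y):
    dispatch = {
        '+': handle_add,
        '-': handle_sub,
    }
    # .get() supplies a fallback handler for unknown operators
    handler = dispatch.get(op, lambda *_: None)
    return handler(x, y)

print(calculate('+', 2, 3))  # 5
print(calculate('?', 2, 3))  # None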
|
OED tools: Pushover
The problem
In my last post about Linux at command I talked about notifications on my mobile.
In most of my automation scripts I prefer notifications to come to my mobile instead of via email or SMS (really? in 2015?) because:
it is always with me
I check it thousands of times a day (I know, you too ;-) )
it is a preferred channel - a specific app just for that
The automation
There are many notification services available today for free or minimal cost. When I needed it, after evaluating a few solutions, I chose Pushover .
Main advantages of Pushover:
no monthly fees
7500 messages per month - much more than I needed so far
API available for most programming languages
client available for many platforms
For bash scripts I call a Python script sendpush.py:
#!/usr/bin/python
''' last change: 20151203 notify to pushover '''
import commands
import httplib, urllib
import sys
def sendPush(messageText):
conn = httplib.HTTPSConnection("api.pushover.net:443")
conn.request("POST", "/1/messages.json",
urllib.urlencode({
"token": "REPLACE_WITH_YOUR_TOKEN",
"user": "REPLACE_WITH_YOUR_USER",
"message": messageText,
}), { "Content-type": "application/x-www-form-urlencoded" })
conn.getresponse()
def main():
sendPush(sys.argv[1])
main()
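Note that the script above targets Python 2 (httplib, urllib.urlencode, the commands module). A rough Python 3 equivalent, using the same Pushover endpoint and the same placeholder token and user values, might look like this (an illustrative sketch, not part of the original post):
#!/usr/bin/env python3
# Rough Python 3 equivalent of sendpush.py (the original targets Python 2).
import sys
import http.client
import urllib.parse

def send_push(message_text):
    conn = http.client.HTTPSConnection("api.pushover.net", 443)
    conn.request("POST", "/1/messages.json",
                 urllib.parse.urlencode({
                     "token": "REPLACE_WITH_YOUR_TOKEN",
                     "user": "REPLACE_WITH_YOUR_USER",
                     "message": message_text,
                 }),
                 {"Content-type": "application/x-www-form-urlencoded"})
    conn.getresponse()

if __name__ == "__main__":
    send_push(sys.argv[1])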
Usually I send a notification when the script starts, one when it ends, and a final one with a check of the expected output.
I had positive experiences with Pushover so far, messages always arrived on time and never got lost in the path so I’m not actively looking for a replacement (not very Kaizen …) but suggestions are welcome.
|
This blog post is part 5 of the Machinetalk explained series describing the concepts and ideas behind the Machinetalk middleware.
In this section, I describe the part of Machinetalk which probably needs most explanation. What I'm talking about is Machinetalk GSL - language bindings for Machinetalk using code generation.
But before we delve deep into code generation and meta-programming I recommend you to read the previous parts of Machinetalk explained:
A Case for Model-Driven Software Development
Before I start to explain MOP (Model Oriented Programming) it is important that you understand the reasons behind choosing this development approach.
I began to work on Machinetalk when I started the QtQuickVcp project. The project is a combination of UI components and Machinetalk bindings written in Qt/C++. It turned out that coding the language bindings was rather simple using ZeroMQ and Protobuf. However, the hardest part was to look up the Machinetalk API in the Machinekit code sources and to implement the protocol flow correctly.
In the Fall last year I began to work on the Machinetalk bindings for Python pymachinetalk. For implementing the Python language bindings, I had to do almost the same things as for the Qt/C++ language bindings. Furthermore, I noticed that I also had to implement almost the same things for every new Machinetalk service I added to Machinekit.
As a programmer, I know that whenever you need to do a repetitive task it is time for abstraction. Your abstraction can be a function, an object or a module. But, we are usually only taught how to abstract in a particular programming language.
However, this particular problem is only partially solvable using language dependent means of abstraction. For example, we could introduce abstract Machinetalk service classes in C++ to make the task less repetitive. You can easily see that we need to do the same thing again in Python and any other programming language. The problem gets worse the fewer means of abstraction a particular programming language offers - think of a C Machinetalk language binding for example.
Other middleware solutions such as ROS have the same problem. Vendors provide a reference implementation for a particular programming language; the rest is left to the community. However, living in a strongly heterogeneous world, we cannot accept middleware solutions that work only for a particular programming language.
When studying the ZeroMQ reference manual - the zguide - I came across the Code Generation section mentioning the GSL tool. It easily caught my attention since iMatix claims to use it to build protocols themselves.
Now that you have seen the problems that led me to explore the MOP approach for Machinetalk, it is time to explain Model-Oriented Programming.
Model-Oriented Programming
Model-Oriented Programming (MOP) is the application of model-driven development methods to programming. In comparison to traditional Model Driven Architecture (MDA) development approaches it does not depend on any general-purpose modeling language such as the Unified Modeling Language (UML).
Scientists found that model-centric software development approaches have not been widely adopted although they are in general considered good practice. A majority of the interviewed programmers claim that they do not think the generated output of code generators can be considered decent code.
Furthermore, general-purpose modeling languages are often found to be too generic. In MOP this problem is approached by developing not only domain-specific abstract models but also code generators for domain-specific modeling languages. Therefore, MOP can be applied to express concepts related to the problem domain. From these models one can generate not just non-functional code skeletons but fully working software components.
MOP is most useful for projects that require repetitive coding. Moreover, models created in the process of MOP are technology and language independent and convertible to domain-specific and optimized source code. An advantage compared to general-purpose modeling languages is that code generators can be optimized to generate high quality and human-readable source code.
iMatix Generator Scripting Language
GSL by iMatix is an open source code construction tool and MOP language.
GSL uses simple XML documents without style sheets and namespaces as model files. Therefore, GSL is a Textual Modeling Language (TML) and shares all the benefits of text-based modeling.
You need no special software to edit GSL files. However, I found it most useful to use GNU Emacs to edit the gsl files. Put the editor into the major-mode for the corresponding language (e.g. python-mode) and enable the following minor mode by issuing gsl-mode.
(define-minor-mode gsl-mode
"Highlight two successive newlines."
:lighter " gsl"
(if gsl-mode
(highlight-regexp "\\(^\\..*\\)\n" 'hi-green-b)
(unhighlight-regexp "\\(^\\..*\\)\n"))
(if gsl-mode
(highlight-regexp "\\(\\$(.*?)\\)" 'hi-red-b)
(unhighlight-regexp "\\(\\$(.*?)\\)")))
I found the electric indent mode very annoying when editing GSL files. You can easily turn it off by issuing electric-indent-mode.
The GSL interpreter uses XML and GSL documents as input. It extracts data from the XML files and pushes it into a data tree.
The GSL interpreter interprets GSL documents in either template mode or script mode.
If the interpreter is in template mode, it outputs each line directly to the specified output file, except for lines starting with a . symbol, which are interpreted as GSL commands.
In script mode, the interpreter does the exact opposite.
GSL Example
I personally prefer examples over long descriptions. Therefore, I created a simple GSL example for generating Python classes from an abstract model.
The model model.xml looks as follows:
<?xml version = "1.0" ?>
<module name = "foo">
<class name = "bar">
<property name = "foo bar"/>
<property name = "bar"/>
</class>
</module>
To generate code from the model we need a GSL template. The template pygen.gsl looks as follows:
.template 1
.output "$(module.name:c).py"
.for class
class $(class.Name)(object):
def __init__(self):
. for property
self._$(name:c) = None
. endfor
. for property
@property
def $(name:c)(self):
print('queried "$(name)"')
return self._$(name:c)
. endfor
.endfor
.endtemplate
As you see, without proper code highlighting the template becomes rather confusing. With the gsl-mode enabled the same code looks as follows:
When we execute the script with the following command gsl -script:pygen.gsl model.xml this results in the Python module foo.py:
class Bar(object):
def __init__(self):
self._foo_bar = None
self._bar = None
@property
def foo_bar(self):
print('queried "foo bar"')
return self._foo_bar
@property
def bar(self):
print('queried "bar"')
return self._bar
Of course, we wouldn't create a model for such a simple problem in real life. However, it demonstrates the capabilities and the simplicity of GSL very well. We also see that the approach becomes more worthwhile as the model grows in complexity, which in this case would mean adding more modules, classes, and properties.
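To make the result concrete, here is how the generated class could be exercised (an illustrative snippet, assuming the foo.py produced above is importable):
from foo import Bar

b = Bar()
print(b.foo_bar)  # the property prints: queried "foo bar", then the value None
print(b.bar)      # the property prints: queried "bar", then the value None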
According to a discussion on Reddit, GSL is a second-order meta-programming language. Second-order meta-programming means using this language one can build Domain Specific Languages, which is what we need for the Machinetalk code generator.
Modeling the Machinetalk Middleware
Now that we have seen the tools used for generating the Machinetalk language bindings it is time to explain the modeling approach.
The Machinetalk middleware design is split into sub-models to decrease the complexity of individual models and to separate the scope of each model.
Protocol models contain messages and their relation to the Machinetalk Protobuf container.
Component models are used to design behavior and interface of software components.
The GSL compiler converts the models into executable language bindings for multiple programming languages. The Protobuf compiler generates message classes. The generated component classes use these message classes to serialize and deserialize messages.
Developers implementing new language bindings only need to develop a GSL template (a code-generator) for the target language and component classes containing language-specific details.
Model and Protocol Layering
The Machinetalk middleware separates the models into three layers.
The Channel layer models the behavior of a single channel, such as, for example, the RPC or publish-subscribe channels.
The Composition layer composes multiple channels to form a multi-channel protocol. This method allows combining the power of publish-subscribe and RPC in services.
As the name suggests, the models do not cover the Implementation layer. This layer enables the implementation of language-dependent presentation of the message data.
Protocol Model
The protocol model has two main functions. First, it defines and documents all messages related to the protocol used by a Machinetalk channel or component composed of multiple channels. Moreover, it also clearly defines the relation between the structure of Protobuf messages and Machinetalk messages.
Protobuf as API
Protobuf is a great serialization technology. Unfortunately, it lacks a few things to work as API description for Machinetalk services.
First, Protobuf itself provides an Interface Description Language (IDL) for describing messages. However, it does not include tools to describe the relation between messages.
Secondly, Machinetalk uses a single top-level container message and sub-messages for each protocol. The reasons behind this decision have been described earlier. However, this leads to the problem that a single message description is not enough to describe the API of a Machinetalk service.
Example
An example is worth a thousand words:
<data name="command">
<field name="ticket" requirement="MAY" />
<response name="emccmd executed" />
<response name="emccmd completed" />
<response name="error" />
</data>
<message name= "emc task plan run" inherit="command">
Run the task planner from the specified line number.
<field name="emc_command_params" message="EmcCommandParameters" requirement="MUST">
<field name="line number" requirement="MUST" />
</field>
<field name="interp_name" requirement="MUST" />
</message>
<system name="RPC">
Description of RPC components.
<include filename="rpc_client.xml" />
<include filename="rpc_service.xml" />
</system>
The model contains the description for all messages used in a system (combination of client and server / publisher and subscriber). Based on this model the code generator produces the protocol documentation.
Component Model
The component model describes the component state machines, channels, sockets, and timers.
The state machines are defined in the SCXML format, a W3C standard for defining state machines. My favorite editor for editing these charts is Qt Creator (>= 4.2), but you can also find free and open source tools from other vendors to edit the files graphically. Alternatively, you can also use a simple text editor to modify the XML source tree.
I don't want to go too much into detail about SCXML. Instead, please take a look at the following statechart generated by the Machinetalk dot-file generator:
The transitions and actions in the statechart have special meaning. Events can be triggered by incoming and outgoing messages, timers, triggers, and socket state changes.
Actions send messages, start and stop channels, start, stop and reset timers and trigger custom slots.
<trigger name = "start">
<event name = "connect" when = "down"/>
</trigger>
<slot name="set connected" />
<slot name="clear connected" />
Another core element of the middleware components is the timer. A typical use case of timers in middleware components is sending and verifying periodic heartbeat messages.
<timer
name = "heartbeat"
interval = "2500"
liveness = "2" >
For monitoring if the connection is alive.
<tick>
<event name = "heartbeat tick" when = "up" />
<event name = "heartbeat tick" when = "trying" />
</tick>
<timeout>
<event name = "heartbeat timeout" when = "up" />
<event name = "heartbeat timeout" when = "trying" />
</timeout>
</timer>
At the core of the component model are the socket and channel definition. If a socket definition refers to a class this means we are working on a composition layer component reusing a channel layer component.
In addition to the events triggered by state changes, each socket contains definitions for incoming and outgoing messages. The public attribute defines the visibility of the message interface in the resulting software class.
<socket name="command" class="RPC Client" module="Machinetalk">
The command channel is used to issue commands to mklauncher.
<state name="trying">
<event name="command trying" when="up" />
</state>
<state name="up">
<event name="command up" when="trying" />
</state>
<outgoing name="emc task abort" public="true" />
<incoming name="*">
<event name = "any msg sent" when = "up" />
<event name = "any msg sent" when = "trying" />
</incoming>
<incoming name="error" public="true">
<note />
</incoming>
</socket>
Please also note the use of the special tag note. This tag copies the content of a note message to the error string. I tried to avoid these implementation specific tags as much as possible.
Code Generators
Besides the models, the code generators are the second most important part of Machinetalk GSL.
The fundamental idea behind Machinetalk GSL is that for a new language binding one only needs to write a new code generator. The complexity of the code generator is far smaller than to write a complete language binding in any programming language.
For the core Machinetalk services I measured a code generation ratio (the ratio of generated code LOC to code-generator LOC) of 6 for Python and 10 for C++. This value increases with any additional Machinetalk service.
But before I talk too much about the benefits of code generation, let's take a look at how to implement a new language binding.
To implement a new Machinetalk binding you need to fulfill the following requirements:
FSM implementation: required for the component state machines
Concurrency: Machinetalk uses an asynchronous API. Therefore we need some form of concurrency support, such as an event loop or multi-threading.
Timers: Timers are required for heartbeat messages.
Service Discovery: such as mDNS/DNS-SD
Implementation Process
To implement a new language binding using Machinetalk GSL, I recommend the following process:
First of all, research the minimum requirements in your target programming language and framework.
Next, create a small proof of concept implementation. This step will help you writing the code generator.
As the third step, generalize the proof of concept to implement the code generator. The existing implementations will help you.
When you have completed the code generator, continue by implementing the implementation-layer components using the newly generated language bindings.
Already implemented Code Generators
During the last year, I have continuously added code generators to Machinetalk GSL. Currently, the project contains code generators for the following programming languages, frameworks, and tools:
Qt/C++: used in QtQuickVcp
Python: for pymachinetalk, not yet integrated
Node.js: not used so far
JavaScript (Browser): used in WebVCP
Markdown + Graphviz Dot: used in Machinetalk-Doc
UPPAAL: used for formal verification of the middleware models
Conclusion
In this blog post, we have learned about code generators for the Machinetalk language bindings. We used the GSL language and tool to write the code generators and created XML models.
If you want to learn more about Machinetalk, GSL, and code generation I recommend you to take a look at the machinetalk-gsl GitHub repository.
Even if you are not going to work on Machinetalk GSL, I still can recommend taking a closer look at MOP to add it to your toolbox.
The end of this article also brings me to the end of the Machinetalk explained series. I hope you have enjoyed reading the articles and learned more about the Machinetalk middleware.
Please send me feedback, ideas, and recommendations.
Your
Machine Koder
|
0xUsernames
There are so many people using the messaging service that they are running out of space to store all the usernames! To fix this, they will start storing usernames as hexadecimal where possible.
If a username consists only of the characters 0123456789ABCDEF (case-insensitive), it can be converted to hexadecimal and stored as an integer. For example, the username ba5eba11 can be interpreted as the hexadecimal integer 0xBA5EBA11.
But what about 05AB1E? That leading zero would be lost. So whenever we convert a username, we make sure to prepend a 1 before reading it as an integer.
The challenge
Your task is to write a program or function that, given a non-empty username as a string, "hexa-compresses" the username:
If it can be interpreted as a hexadecimal integer, prepend a 1, interpret it as hexadecimal, and then output the result in base 10.
Otherwise, just return the string unchanged.
This is code-golf, so the shortest solution (in bytes) wins! Built-in base conversion functions are allowed.
Test cases
You may assume that any resulting integers fit within your language's standard integer range.
As with usernames in most messaging systems, the input strings will contain only alphanumeric characters and underscores.
Remember that you always need to prepend the leading 1 before converting!
"ba5eba11" -> 7421737489
"05AB1E" -> 17148702
"dec0de" -> 31375582
"Beef" -> 114415
"da7aba5e" -> 7960443486
"500" -> 5376
"DENNIS" -> "DENNIS"
"Garth" -> "Garth"
"A_B_C" -> "A_B_C"
"0x000" -> "0x000"
For reference, here is the Python 3 implementation I used for the test cases (ungolfed):
import re
def convert_name(name):
if re.fullmatch('^[0-9A-Fa-f]+$', name):
return int('1' + name.upper(), base = 16)
else:
return name
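A quick check of this reference implementation against a few of the test cases (added for illustration):
print(convert_name("ba5eba11"))  # 7421737489
print(convert_name("05AB1E"))    # 17148702
print(convert_name("Garth"))     # Garth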
|
You are mixing function declarations and code execution. Do not.Your programme should look like:
def fun1():
# do_stuff
def fun2()
# do_stuff
def main():
fun1()
fun2()
if __name__ == "__main__":
main()
Stating the obvious in comments is not good
#adding every word into the wordlist
#wordlist2 = cleaned wordlist (delete non-alphabetical characters)
#wordlist3 = delete empty words ("")
#count wordlist3 into dictionary:
#print out the first 10 words and values from dictionary:
Comments should explain why, not what; if you need comments to describe the what, you need better names and maybe more functions.
The following for example should be declared as a function and called later
book = open("bibel.txt", "r")
dict = {}
lines = book.readlines()
wordlist=[]
#adding every word into the wordlist
for line in lines:
words = line.split(" ")
for word in words:
wordlist.append(word)
book.close()
def get_wordlist(filename):
with open(filename) as f:
lines = f.readlines()
# More exaplanation about the following line later
wordlist = [word for line in lines for word in line.split(" ")]
return wordlist
I added in if __name__ == "__main__": because it allows you to import your file without actually running it.
dict = {} is a global, do not use changing globals, (Global Constants are ok).
Also it should be noted that dict is reserved, it is better to write word_dict or dict_.
You should use with open(filename) as f: do_stuff(f.read()) when you open a file; it is simpler and handles closing automatically.
You wrote
wordlist=[]
#adding every word into the wordlist
for line in lines:
words = line.split(" ")
for word in words:
wordlist.append(word)
The following is more idiomatic than a for loop with append but may be a little harder to understand. Using it or not is up to personal style.
wordlist = [word for line in lines for word in line.split(" ")]
When a list comprehension is easy, you should prefer it over append:
wordlist2=[]
for word in wordlist:
wordlist2.append(clean(word))
should become
wordlist2 = [clean(word) for word in wordlist]
Checking if a thing is empty in Python is done like if not thing
wordlist3 = [word for word in wordlist2 if word]
Putting word = word.lower() at the start simplifies things a bit
def clean(word):
word = word.lower()
for char in word:
if char not in "abcdefghijklmnopqrstuvwxyzäüöß":
word = word.replace(char,'')
return word
Clean is not a clear name. You should call that function alphabet_chars_only.
"abcdefghijklmnopqrstuvwxyzäüöß" is the alphabet; that is easy enough to guess, but a global constant at the start of your file would still be better:
ALPHABET = "abcdefghijklmnopqrstuvwxyzäüöß"
def clean(word):
word = word.lower()
for char in word:
if char not in ALPHABET:
word = word.replace(char,'')
return word
Just to show the sheer power of list comprehension, you may or may not use the following, it is personal preference:
def clean(word):
word = word.lower()
    return ''.join([char for char in word if char in ALPHABET])
Down there there must be a typo
#print out the first 10 words and values from dictionary: # --- TEN
for i in range(100): # -- A HUNDRED
topword = max(dict, key=dict.get)
print(topword, dict[topword])
del dict[topword]
You do not use i in the above loop, it is convention to mark an unused variable as _ or __
You may print the top words like the following:
# Credit goes to http://stackoverflow.com/questions/613183/sort-a-python-dictionary-by-value
import operator
# sort by count, highest first, then take the first 10 pairs
sorted_topwords = sorted(dict_.items(), key=operator.itemgetter(1), reverse=True)
print(sorted_topwords[:10])
( lst[:x] means the first x elements of lst)
Python has an official style guide that you should follow when writing good and readable code; it is called PEP 8. The best and fastest option is to write your code without thinking about it and then run autopep8 on your script; it will make your script PEP 8 compliant with no effort.
Docstrings are triple quoted strings put at the start of a function to give some info about it. It is up to you to decide when a function is simple that it doesn't need one.
Putting all my advice together:
import operator
FILENAME = "bibel.txt"
ALPHABET = "abcdefghijklmnopqrstuvwxyzäüöß"
def get_wordlist(filename):
with open(filename) as f:
lines = f.readlines()
wordlist = [word for line in lines for word in line.split(" ")]
return wordlist
def alphabet_chars(word):
word = word.lower()
return ''.join([char for char in word if char in ALPHABET])
def count(wordlist):
"""
Returns a dict where the words are the keys
and their frequencies are the values.
"""
dict_ = {}
for word in wordlist:
if word in dict_:
dict_[word]=dict_[word]+1
else:
dict_[word]=1
return dict_
def main():
wordlist = get_wordlist(FILENAME)
new_wordlist = [alphabet_chars(word) for word in wordlist if word]
dict_ = count(new_wordlist)
    sorted_dict_ = sorted(dict_.items(), key=operator.itemgetter(1), reverse=True)
print(sorted_dict_[:10])
if __name__ == "__main__":
main()
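As a further simplification (my own addition, not part of the review above), collections.Counter does the counting and the top-N selection directly:
from collections import Counter

def count(wordlist):
    # Counter tallies the words for us
    return Counter(wordlist)

word_counts = count(["a", "b", "a", "c", "a", "b"])
print(word_counts.most_common(2))  # [('a', 3), ('b', 2)]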
|
Overview
In the sklearn package, the OneHotEncoder function is very practical: it converts each element of a categorical feature into a value that can be used in computation. This article explains the usage of this function in detail; you can also refer to the official documentation, sklearn.preprocessing.OneHotEncoder.
Explanation
The function lives in the sklearn.preprocessing module and has the signature:
OneHotEncoder(n_values=’auto’, categorical_features=’all’, dtype=<class ‘numpy.float64’>, sparse=True, handle_unknown=’error’)
To make this easier to understand, let's first look at the following example:
# -*- coding: utf-8 -*-
from sklearn.preprocessing import OneHotEncoder
enc = OneHotEncoder()
enc.fit([[0, 0, 3],
[1, 1, 0],
[0, 2, 1],
[1, 0, 2]])
ans = enc.transform([[0, 1, 3]]).toarray() # without toarray() the output is in sparse storage format (index plus value); passing sparse=False achieves the same effect
print(ans) # output: [[ 1. 0. 0. 1. 0. 0. 0. 0. 1.]]
Let me explain what this output means. For the input array, each row is still treated as a sample and each column as a feature.
Look at the first feature, i.e. the first column \([0, 1, 0, 1]\): it has two possible values, 0 or 1, so one-hot encoding uses two positions for this feature, with \([1,0]\) meaning 0 and \([0,1]\) meaning 1. The first two positions of the output above, \([1,0...]\), therefore say that this feature is 0.
The second feature, the second column \([0,1,2,0]\), has three possible values, so one-hot encoding uses three positions, with \([1,0,0]\) meaning 0, \([0,1,0]\) meaning 1 and \([0,0,1]\) meaning 2. Positions three to six of the output above, \([...0,1,0,0...]\), say that this feature is 1.
The third feature, the third column \([3,0,1,2]\), has four possible values, so one-hot encoding uses four positions, with \([1,0,0,0]\) meaning 0, \([0,1,0,0]\) meaning 1, \([0,0,1,0]\) meaning 2 and \([0,0,0,1]\) meaning 3. The last four positions of the output above, \([...0,0,0,1]\), say that this feature is 3.
By now the meaning should be clear. Note that although the numbers in the training samples merely represent categories, the data must still be numeric; string-typed data will raise an error.
Now let's explain the parameters of the function.
n_values='auto' means that how many positions each feature uses is inferred automatically from the dataset, i.e. as many positions as there are categories. You can also specify it yourself; see the following example:
# -*- coding: utf-8 -*-
from sklearn.preprocessing import OneHotEncoder
enc = OneHotEncoder(n_values = [2, 3, 4])
enc.fit([[0, 0, 3],
[1, 1, 0]])
ans = enc.transform([[0, 2, 3]]).toarray()
print(ans) # output: [[ 1. 0. 0. 0. 1. 0. 0. 0. 1.]]
Notice that the second feature column of the training samples has no category 2, yet category 2 is still encoded in the result; that is exactly the effect of specifying the dimensions yourself (we used 3 positions for the second feature, which naturally includes category 2), and the same holds for the third feature column. This also warns us that if some categorical values are missing from the training samples, we must set the n_values parameter explicitly to keep the encoding from going wrong.
categorical_features='all' specifies which features are encoded; by default all of them are. You can also select particular features yourself, by index or by boolean mask; see the following example:
# -*- coding: utf-8 -*-
from sklearn.preprocessing import OneHotEncoder
enc = OneHotEncoder(categorical_features = [0,2]) # equivalent to [True, False, True]
enc.fit([[0, 0, 3],
[1, 1, 0],
[0, 2, 1],
[1, 0, 2]])
ans = enc.transform([[0, 2, 3]]).toarray()
print(ans) # output: [[ 1. 0. 0. 0. 0. 1. 2.]]
In the output, the first two positions \([1,0]\) represent 0, the middle four positions \([0,0,0,1]\) are the encoding of the third feature's value 3, and the second feature's value 2 is not encoded, so it is simply placed at the end.
dtype=<class 'numpy.float64'> sets the numeric type of the encoded values; the default is floating point.
sparse=True sets the output format; the default is True, i.e. a sparse format. If you pass False you no longer need toarray().
handle_unknown='error' can be set to "error" or "ignore", i.e. whether to raise an error or to ignore an unknown category when one is encountered.
The transform(X) method encodes \(X\). In practice we more often use fit_transform(), which does both steps at once; see the following example:
# -*- coding: utf-8 -*-
from sklearn.preprocessing import OneHotEncoder
enc = OneHotEncoder(sparse = False)
ans = enc.fit_transform([[0, 0, 3],
[1, 1, 0],
[0, 2, 1],
[1, 0, 2]])
print(ans) # output: [[ 1. 0. 1. ..., 0. 0. 1.]
# [ 0. 1. 0. ..., 0. 0. 0.]
# [ 1. 0. 0. ..., 1. 0. 0.]
# [ 0. 1. 1. ..., 0. 1. 0.]]
|
To give credit where credit is due: This problem was taken from the ACMICPC-Northwest Regional Programming Contest. Thank you problem writers.
You are helping an archaeologist decipher some runes. He knows that this ancient society used a Base 10 system, and that they never start a number with a leading zero. He's figured out most of the digits as well as a few operators, but he needs your help to figure out the rest.
The professor will give you a simple math expression, of the form
[number][op][number]=[number]
He has converted all of the runes he knows into digits. The only operators he knows are addition (+),subtraction(-), and multiplication (*), so those are the only ones that will appear. Each number will be in the range from -1000000 to 1000000, and will consist of only the digits 0-9, possibly a leading -, and maybe a few ?s. If there are ?s in an expression, they represent a digit rune that the professor doesn't know (never an operator, and never a leading -). All of the ?s in an expression will represent the same digit (0-9), and it won't be one of the other given digits in the expression. No number will begin with a 0 unless the number itself is 0, therefore 00 would not be a valid number.
Given an expression, figure out the value of the rune represented by the question mark. If more than one digit works, give the lowest one. If no digit works, well, that's bad news for the professor - it means that he's got some of his runes wrong. output -1 in that case.
Complete the method to solve the expression and find the value of the unknown rune. The method takes a string as a parameter representing the expression and will return an int value representing the unknown rune, or -1 if no such rune exists.
My Solution
import re, operator as op
parse_op = re.compile(r"(-?[0-9?]+)([-+*])(-?[0-9?]+)(=)(-?[0-9?]+)") # For parsing the whole expression.
ops = {"*": op.mul, "+": op.add, "-": op.sub}
def solve_runes(s):
search = set("0123456789") - set(c for c in s if c.isnumeric())
n = [0]*3
n[0], op, n[1], x, n[2] = parse_op.search(s).groups()
if any(len(x) > 1 and x[0] == "?" for x in n):
search -= {"0"}
for digit in sorted(search):
v = [int(x.replace("?", digit)) for x in n]
if ops[op](v[0], v[1]) == v[2]:
return int(digit)
return -1
Sample Test Case
test.assert_equals(solve_runes("1+1=?"), 2, "Answer for expression '1+1=?' ")
test.assert_equals(solve_runes("123*45?=5?088"), 6, "Answer for expression '123*45?=5?088' ")
test.assert_equals(solve_runes("-5?*-1=5?"), 0, "Answer for expression '-5?*-1=5?' ")
test.assert_equals(solve_runes("19--45=5?"), -1, "Answer for expression '19--45=5?' ")
test.assert_equals(solve_runes("??*??=302?"), 5, "Answer for expression '??*??=302?' ")
test.assert_equals(solve_runes("?*11=??"), 2, "Answer for expression '?*11=??' ")
test.assert_equals(solve_runes("??*1=??"), 2, "Answer for expression '?*11=??' ")
|
So, I have written a small Python script which hangs on the exec_command step when using the support user, while the same works fine when using the root user.
I have no problems running the script from the terminal when logged in as the support user. This is what I am unable to understand: why does paramiko.exec_command hang?
I have tried running the script from Windows 10 running Python 3.6.6 |Anaconda custom (64-bit)| (default, Jun 28 2018, 11:27:44) [MSC v.1900 64 bit (AMD64)] on win32, as well as Ubuntu 19.04 running Python 3.7.3 (default, Mar 27 2019, 22:11:17) [GCC 7.3.0] :: Anaconda, Inc. on linux.
The machine I am trying to ssh into is running CentOS release 6.9 (Final). Here is ls -ltr on this machine:
-rwxr-xr-x. 1 support support 10430264 May 10 12:13 port_check
I have tried adding and removing sudo from the commands to be executed over SSH, and also commenting out ssh.invoke_shell().
import sys
import paramiko
def pew_print(some_input):
try:
some_input = some_input.decode("utf-8")
except Exception as errors:
print("Errors : {0}".format(errors))
pass
some_input = str(some_input)
sys.stdout.write(some_input)
sys.stdout.write("\n")
sys.stdout.flush()
def ssh_command_output(ssh, command_string):
# ssh.invoke_shell()
stdin, stdout, stderr = ssh.exec_command(command_string, timeout=90)
pew_print(stdout.read())
def something(ip_address):
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(hostname=ip_address, port=22, username="support", password="pass")
# ssh_command_output(ssh, "sudo chmod 755 /home/support/port_check")
# ssh_command_output(ssh, "sudo /home/support/port_check")
ssh_command_output(ssh, "ls -ltr")
I expect the result to be the same when using the support user as when using the root user.
EDIT:
sudo for the support user doesn't require a password, and I have tried just executing ls -ltr as the support user, which hangs as well.
I can normally ssh [email protected] and then execute all the above commands.
|
Before writing a crawler, you need to know these two conversions:
String to bytes
str --> bytes
encode()
Bytes to string
bytes --> str
decode()
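A minimal illustration of these two conversions (added for clarity):
s = "hello"               # str
b = s.encode("utf8")      # str --> bytes
s2 = b.decode("utf8")     # bytes --> str
print(type(b), type(s2))  # <class 'bytes'> <class 'str'>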
1. Explaining the urllib.request methods with a simple example
read: reads the response content (the content is of bytes type)
geturl: gets the requested url
getheaders: gets the header information
getcode: gets the status code
readlines: reads line by line and returns a list, whose elements are all of bytes type
1.1 Getting the page source of Baidu
import urllib.request
url = "https://www.baidu.com"
response = urllib.request.urlopen(url)
# without decode() this would print bytes (binary)
print(response.read().decode('utf8'))
# get the status code
print(response.getcode())
# get the response headers
print(response.getheaders())
# get the url
print(response.geturl())
# two ways to save the content that was read
# method 1: open with mode "w" and write a string
with open("baidu.html","w",encoding='utf8') as f:
    f.write(response.read().decode('utf8'))
# method 2: open with mode "wb" and write binary
with open("baidu.html","wb") as f:
    f.write(response.read())
1.2 Example: crawling an image from Baidu
import urllib.request
image_url = 'https://timgsa.baidu.com/timg?image&quality=80&size=b9999_10000&sec=1561806237168&di=319407f10b4c55baf1d5b905d8a2f20a&imgtype=0&src=http%3A%2F%2Fimg2.ph.126.net%2F2zB3_wWPXlEW0RdwQa8d6A%3D%3D%2F2268688312388037455.jpg'
# method 1
# response = urllib.request.urlopen(image_url)
# images can only be written locally in binary format
# with open('qing.jpg','wb') as fp:
#     fp.write(response.read())
# method 2
urllib.request.urlretrieve(image_url,'chun.jpg')
2. Explaining the parse methods with a simple example
quote: URL-encoding function, converts non-ASCII (e.g. Chinese) characters into %xx escapes
unquote: URL-decoding function, converts %xx escapes back into the original characters
urlencode: given a dict, joins it into a query_string
2.1 Building a url yourself
import urllib.parse
url = 'http://www.baidu.com/index.html'
#http://www.baidu.com/index.html?name=goudan&age=18&sex=nv&height=180
name = '狗蛋'
age = 18
sex = '女'
height = '180'
data={
'name':name,
'age':age,
'sex':sex,
'height':height
}
# method 1
query_string = urllib.parse.urlencode(data)
print(query_string)
# method 2
# iterate over the dict
# it = []
# for k,v in data.items():
# it.append(k+'='+str(v))
# query_string = '&'.join(it)
url = url+'?'+query_string
print(url)
And that's the request and parse methods!!!
|
This is sort of a continuation of my previous post, and sort of not. Obviously.
Last time I referenced an approach that uses Django's Middleware feature to hook into requests and perform authentication (that earlier post (working title) - BASIC auth and forcing the HTTPS scheme in Django). A similar mechanism exists in WSGI and can be used via mod_wsgi, so I gave it a try. It is also quite handy that this lets you authenticate against Django's User model, so I adopted that as well.
It is properly documented officially, so in principle there is no way to get lost, but on the server I was deploying to this time, multiple Django apps are installed in multiple Python environments using daemon mode and virtualenv (How to use Django with Apache and mod_wsgi | Django documentation | Django), and because of that I couldn't get it working. Well, I suspect that's just how it is; I don't really understand it, so I may well be wrong.
Specifically, I couldn't figure out how to pass a process-group to WSGIAuthUserScript, so it didn't work: I couldn't get python-path set up for Django, and it kept complaining that it "can't import". Sure, you could just install Django somewhere already on the path, or rewrite the startup script to add the path. You'd think that, and I think that too, but it feels like losing; I wanted to solve it with configuration.
So I tried this and that, but in the end it was impossible, and I ended up rewriting wsgi.py on the Django application side anyway...
import os
import sys
sys.path.append('`path to the Django application`')
sys.path.append('`path to the virtualenv environment`/lib/python2.7/site-packages')
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "`project name`.settings")
from django.contrib.auth.handlers.modwsgi import check_password
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
Such are the happy days of a lowly old guy who can get flustered over something this trivial.
|
# -*- coding: utf-8 -*-
# Copyright 2010-2012 Kolab Systems AG (http://www.kolabsys.com)
#
# Jeroen van Meeuwen (Kolab Systems) <vanmeeuwen a kolabsys.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; version 3 or, at your option, any later version
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Library General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
import commands
import pykolab
from pykolab.translate import _
log = pykolab.getLogger('pykolab.cli')
conf = pykolab.getConf()
auth = pykolab.auth
imap = pykolab.imap
def __init__():
commands.register('list_domains', execute, description="List Kolab domains.")
def execute(*args, **kw):
auth.connect()
# Create the authentication object.
# TODO: Binds with superuser credentials!
domains = auth.list_domains()
print "%-39s %-40s" %("Primary Domain Name Space","Secondary Domain Name Space(s)")
# TODO: Take a hint in --quiet, and otherwise print out a nice table
# with headers and such.
for domain,domain_aliases in domains:
if len(domain_aliases) > 0:
print _("%-39s %-40s") %(
domain,
', '.join(domain_aliases)
)
else:
print _("%-39s") %(domain)
|
The sleep() function in Python
The sleep() function in Python is used to suspend execution of the current thread for the given number of seconds.
Example 1: Using sleep()
import time
print ("Printed to the screen immediately.")
time.sleep(3)
print ("Printed to the screen after 3 seconds.")
This method does not return any value; it simply delays execution, working like this:
Execute the task that prints "Printed to the screen immediately."
Delay execution for 3 seconds.
Continue execution and print "Printed to the screen after 3 seconds."
Example 2: Creating a digital clock in Python
import time
while True:
localtime = time.localtime()
result = time.strftime("%I:%M:%S %p", localtime)
print(result)
time.sleep(1)
In the program above, Quantrimang creates and prints the local time inside an infinite while loop. After printing the result, execution is delayed for 1 second and then the current time is printed again. This process repeats thanks to the while loop, forming a digital clock in Python.
11:58:31 AM 11:58:32 AM 11:58:33 AM 11:58:34 AM 11:58:35 AM 11:58:36 AM 11:58:37 AM 11:58:38 AM ... .. ...
Or another way to make the digital clock:
import time
while True:
localtime = time.localtime()
result = time.strftime("%I:%M:%S %p", localtime)
print(result, end="", flush=True)
print("\n", end="", flush=True)
time.sleep(1)
Multithreading in Python
Before talking about sleep() in multithreaded programs, let's briefly mention processes and threads.
A Process is a running instance of a program.
A Thread is a unit of execution inside a process. One process can contain multiple threads.
Example 3: Python multithreading
import threading
def print_hello_three_times():
for i in range(3):
print("Hello")
def print_hi_three_times():
for i in range(3):
print("Hi")
t1 = threading.Thread(target=print_hello_three_times)
t2 = threading.Thread(target=print_hi_three_times)
t1.start()
t2.start()
When you run the program, the output will look something like this:
Hello Hello Hi Hello Hi Hi
The program above has two threads, t1 and t2. These threads are started using the statements t1.start() and t2.start().
Note that t1 and t2 run concurrently, so you may get different output.
time.sleep() in multithreaded programs
The sleep() function suspends execution of the current thread for a given number of seconds.
In single-threaded programs, sleep() suspends execution of the thread and of the process. In multithreaded programs, however, this function suspends only one thread rather than the whole process.
Example 4: sleep() in a multithreaded program
import threading
import time
def print_hello():
for i in range(4):
time.sleep(0.5)
print("Hello")
def print_hi():
for i in range(4):
time.sleep(0.7)
print("Hi")
t1 = threading.Thread(target=print_hello)
t2 = threading.Thread(target=print_hi)
t1.start()
t2.start()
The result looks like this:
Hello Hi Hello Hi Hello Hello Hi Hi
The program above has two threads. Here we used time.sleep(0.5) and time.sleep(0.7) to pause these two threads for 0.5 seconds and 0.7 seconds respectively.
|
The if, if...else and if...elif...else statements in Python
In the previous part we explored some of Python's data types and how to use them, and learned a little about the while statement. In this part, you will learn about the most common statement in Python: if.
If you have already learned another programming language, you surely know what this statement is for, but in Python it has a few rather interesting characteristics. Let's take a look.
Decision making is necessary when we want to execute a piece of code only when a certain condition is satisfied. The if...elif...else statement serves this purpose in Python. Below we will go through Python's if statements; each section has a concrete example and explanation so you can understand them clearly.
Main contents:
Structure of the if statement in Python
if condition:
    statement block
Here, the program evaluates the condition and executes the statements only when the condition is True. If the condition is False, the statements are not executed.
In Python, the body of an if statement is indented. The if block begins with an indentation, and the first unindented line marks the end of the if statement.
Flowchart of the if statement in Python
Example 1:
# If the number is positive, we print an appropriate message
num = 3
if num > 0:
    print(num, "is a positive number.")
print("This message is always printed.")
num = -1
if num > 0:
    print(num, "is a positive number.")
print("This message is also always printed.")
Output of the program above:
3 is a positive number.
This message is always printed.
This message is also always printed.
In the example above, num > 0 is the condition; the if block is executed only when this condition is satisfied. When num equals 3, the condition is tested, found to be true, and the if block is executed. When num equals -1, the condition is not satisfied, so the if block is skipped and only the final print() statement runs.
Look a little closer and you will notice that this print() statement is not indented, which tells us that it lies outside the if block, so it is executed regardless of the condition.
The if...else statement
Structure of if...else
if condition:
    if block
else:
    else block
The if...else statement tests the condition and executes the if block if the condition is true. If the condition is false, the else block is executed. Indentation is used to separate the blocks.
Flowchart of if...else
Example 2:
# Check whether the number is negative or positive
# and display an appropriate message
num = 3
if num >= 0:
    print("Positive number or zero")
else:
    print("Negative number")
num1 = -1
if num1 >= 0:
    print("Positive number or zero")
else:
    print("Negative number")
In example 2 we test two variables, num and num1. When num equals 3, the condition num >= 0 is satisfied, so the if block is executed. num1 = -1 does not satisfy num1 >= 0, so the else block is executed and the if block is skipped.
The code above prints two lines: the first is the result of testing num, and the second is the result of testing num1.
Positive number or zero
Negative number
The if...elif...else statement in Python
Structure of if...elif...else
if condition:
    if block
elif condition:
    elif block
else:
    else block
elif is short for else if; it allows us to test multiple conditions.
If the condition is false, the condition of the next elif block is tested, and so on until the end.
If all of the conditions are false, the else block is executed.
Only one block in the if...elif...else chain is executed, depending on the conditions.
There may be zero or several elif parts, and the else part is optional.
Flowchart of if...elif...else
Example 3:
x = int(input("Enter a number: "))
if x < 0:
    x = 0
    print('Negative number')
elif x == 0:
    print('Zero')
elif x == 1:
    print('One')
else:
    print('Positive number')
Output:
If x is a negative number, it prints: "Negative number".
If x = 0, it prints: "Zero".
If x = 1, it prints: "One".
If none of the three conditions above hold, it prints: "Positive number".
The if ... elif ... elif ... chain is a substitute for the switch or case statement found in other programming languages.
Nested if statements in Python
You can write an if...elif...else statement inside another if...elif...else block, forming nested if statements. There is no limit to how many statements can be nested inside one another. Indentation is the only way to recognize the nesting level, so it can become confusing; you should limit its use where possible.
Example 4:
# In this code, a number is read from input
# We check whether the number is negative, positive
# or zero, and display
# an appropriate message
# using nested if statements
num = float(input("Enter a number: "))
if num >= 0:
    if num == 0:
        print("Zero")
    else:
        print("Positive number")
else:
    print("Negative number")
Result 1:
Enter a number: 10
Positive number
Result 2:
Enter a number: -5
Negative number
Result 3:
Enter a number: 0
Zero
At this point you have grasped the basics of using the if statement in Python. In the next part we will learn about the for loop. Stay tuned.
Next article: The for loop in Python
|
I got curious about why the rotation matrix has that particular form
The 2x2 rotation matrix is
$$\begin{bmatrix}\cos{\theta} & -\sin{\theta} \\ \sin{\theta} & \cos{\theta}\end{bmatrix}$$
Why does it have this form? Wikipedia skips over that. Whether my ability simply falls short or the derivation is just omitted, I couldn't tell.
"From geometric considerations, or from the addition theorem for trigonometric functions, one can see that x', y' are expressed as follows."
That is what Wikipedia says. But I did not see it.
It had bothered me for a while, yet with no hints other than web sites I never worked it out.
This month I started reading "Matrices and Vectors (revised edition)", and starting from the formulas in the chapter on linear transformations (around p. 119 of the 4th edition; I don't think the rotation matrix itself is covered there), I thought "so can I just find x and y in this kind of spirit?" and gave it a try.
I will put the "this kind of spirit" parts in bold red.
I tried hard to come up with an explanation I could be satisfied with.
First, let me write down how the rotation matrix is used.
The coordinates of the point to be rotated (the input) are
$$\begin{Bmatrix}x_{\rm{i}}\\y_{\rm{i}}\end{Bmatrix}$$
Concrete coordinates go in here.
The coordinates after the rotation (xo, yo) are
$$\begin{Bmatrix}x_{\rm{o}}\\y_{\rm{o}}\end{Bmatrix} = \begin{bmatrix}\cos{\theta}&-\sin{\theta} \\ \sin{\theta} & \cos{\theta}\end{bmatrix}\times\begin{Bmatrix}x_{\rm{i}}\\y_{\rm{i}}\end{Bmatrix}$$
Multiplying by the matrix yields the transformed coordinates xo, yo.
I did not understand why this rotation matrix takes this form.
Probably the easiest-to-follow site is this one; it also has plenty of Gif animations.
But even with that site, something did not quite click for me.
After drawing various figures, I came up with the approach below.
For example, suppose you want to draw a circle with an Excel scatter chart. Then you need the x, y coordinates of points on the circle.
The x, y coordinates can be obtained with trigonometric functions.
However, the circle is drawn differently depending on whether the rotation starts from the x-axis or from the y-axis.
The circle's x, y coordinates can be obtained as (x, y) = (cos θ, sin θ).
If the rotation starts from the x-axis,
then when θ = 0 we get (1, 0),
and since cos 0 = 1 and sin 0 = 0, that checks out.
I won't write out the rotated coordinates, since they are fiddly numbers.
Starting from the x-axis, I could draw an arc swept from 0 to 60 degrees.
It looks like this works even when xi is not 1:
xo = cos θ × xi, yo = sin θ × xi
Since only the x component is being considered, yi does not appear.
Written as a matrix,
$$\begin{Bmatrix}x_{\rm{o}}\\y_{\rm{o}}\end{Bmatrix} = \begin{bmatrix}\cos{\theta}& 0 \\ \sin{\theta} & 0 \end{bmatrix}\times\begin{Bmatrix}x_{\rm{i}}\\0\end{Bmatrix}$$
For example,
$$\theta = 60\\ \begin{Bmatrix}x_{\rm{o}}\\y_{\rm{o}}\end{Bmatrix} = \begin{bmatrix}\cos{60}& 0 \\ \sin{60} & 0 \end{bmatrix}\times\begin{Bmatrix}x_{\rm{i}}\\0\end{Bmatrix} = \begin{bmatrix}0.866\cdots & 0 \\ 0.5 & 0 \end{bmatrix}\times\begin{Bmatrix}x_{\rm{i}}\\0\end{Bmatrix}$$
Values like cos 60 were computed on a computer (apparently the trigonometric functions are implemented with Maclaurin expansions).
With this, a point that has only an x component can now be rotated.
The circle's x, y coordinates can also be obtained as (x, y) = (sin, cos).
Normally it is (x, y) = (cos, sin), but it can be drawn the other way around too. It can, because it can.
This circle starts its rotation from the y-axis:
when θ = 0 we get (0, 1),
and since sin 0 = 0 and cos 0 = 1, that checks out...
...or so I thought, but the direction of travel is wrong.
Left as sin θ, x increases as θ increases,
so the point rotates toward the +x side (clockwise), while the rotation we want is counterclockwise.
Making the variable negative would be odd, so instead we multiply sin θ by -1.
So the circle's x, y coordinates can be obtained as (x, y) = (-sin θ, cos θ):
xo = -sin θ × yi, yo = cos θ × yi
Since only the y component is being considered, xi does not appear.
Written as a matrix,
$$\begin{Bmatrix}x_{\rm{o}}\\y_{\rm{o}}\end{Bmatrix} = \begin{bmatrix} 0 & -\sin{\theta} \\ 0 & \cos{\theta} \end{bmatrix}\times\begin{Bmatrix}0\\y_{\rm{i}}\end{Bmatrix}$$
For example,
$$\theta = 60\\ \begin{Bmatrix}x_{\rm{o}}\\y_{\rm{o}}\end{Bmatrix} = \begin{bmatrix}0 & -\sin{60}\\ 0 & \cos{60} \end{bmatrix}\times\begin{Bmatrix}0\\y_{\rm{i}}\end{Bmatrix} = \begin{bmatrix} 0 & -0.5 \\ 0 & 0.866\cdots \end{bmatrix}\times\begin{Bmatrix}0\\y_{\rm{i}}\end{Bmatrix}$$
With this, a point that has only a y component can also be rotated.
Putting the two cases above together as a single matrix,
$$\begin{Bmatrix}x_{\rm{o}}\\y_{\rm{o}}\end{Bmatrix} = \begin{bmatrix}\cos{\theta}&-\sin{\theta} \\ \sin{\theta} & \cos{\theta}\end{bmatrix}\times\begin{Bmatrix}x_{\rm{i}}\\y_{\rm{i}}\end{Bmatrix}$$
This is exactly the same as the formula at the start, but I wrote it out anyway.
And so the rotation matrix has been recovered.
Something I noticed recently (or just thought of now): even when someone else's site has Python code, I never bother to copy-paste and run it. I don't try it out.
In fact, I don't even study it by reading it.
The only impression I ever get is "uh-huh, so it can be done."
If that is so, then rather than having the code take up space earlier in the article, shoving it in at the end seems to better match the reader's point of view. What people actually want (want to see) are pictures and animations.
(In truth, the article as a whole had become hard to follow, and the code was simply in the way.)
Hence this position.
I will probably have forgotten all of this by next month.
import numpy as np
import matplotlib.pyplot as plt
N = 500
# gray: circle(360[deg])
theta1 = 2 * np.pi
n1 = np.linspace(0, theta1, N)
x1 = np.cos(n1)
y1 = np.sin(n1)
plt.plot(x1,y1, "-", c="gray")
# red: 60[deg]
theta2 = 1.0 / 6 * 2 * np.pi
n2 = np.linspace(0, theta2, N)
x2 = np.cos(n2) #x2 = np.sin(n2) #x2 = -np.sin(n2)
y2 = np.sin(n2) #y2 = np.cos(n2)
plt.plot(x2,y2, "-", c="red", lw=3)
# Appearance
plt.xlim(-1,1)
plt.ylim(-1,1)
plt.axes().set_aspect('equal')
plt.grid(which='major',color='gray',linestyle='-')
ax = plt.gca() # get current axis
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
ax.xaxis.set_ticks_position('bottom')
ax.spines['bottom'].set_position(('data',0))
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data',0))
plt.show()
|
The Case for Numba in Community Code
The numeric Python community should consider adopting Numba more widely within community code.
Numba is strong in performance and usability, but historically weak in ease of installation and community trust. This blogpost discusses these strengths and weaknesses from the perspective of an OSS library maintainer. It uses other, more technical blogposts written on the topic as references. It is biased in favor of wider adoption given recent changes to the project.
Let’s start with a wildly unprophetic quote from Jake Vanderplas in 2013:
I’m becoming more and more convinced that Numba is the future of fast scientific computing in Python.
– Jake Vanderplas, 2013-06-15
We’ll use the following blogposts by other community members throughout this post. They’re all good reads and are more technical, showing code examples, performance numbers, etc..
https://flothesof.github.io/optimizing-python-code-numpy-cython-pythran-numba.html
https://dionhaefner.github.io/2016/11/suck-less-scientific-python-part-2-efficient-number-crunching/
http://jakevdp.github.io/blog/2013/06/15/numba-vs-cython-take-2/
http://jakevdp.github.io/blog/2015/02/24/optimizing-python-with-numpy-and-numba/
http://stephanhoyer.com/2015/04/09/numba-vs-cython-how-to-choose/
https://murillogroupmsu.com/numba-versus-c/
At the end of the blogpost these authors will also share some thoughts on Numba today, looking back with some hindsight.
Disclaimer: I work alongside many of the Numba developers within the same company and am partially funded through the same granting institution.
Compiled code in Python
Many open source numeric Python libraries need to write efficient low-level code that works well on Numpy arrays, but is more complex than the Numpy library itself can express. Typically they use one of the following options:
C-extensions: mostly older projects like NumPy and SciPy
Cython: probably the current standard for mainline projects, like scikit-learn, pandas, scikit-image, geopandas, and so on
Standalone C/C++ codebases with Python wrappers: for newer projects that target inter-language operation, like XTensor and Arrow
Each of these choices has tradeoffs in performance, packaging, attracting new developers and so on. Ideally we want a solution that is …
Fast: about as fast as C/Fortran
Easy: accessible to a broad base of developers and maintainers
Builds easily: introduces few complications in building and packaging
Installs easily: introduces few install and runtime dependencies
Trustworthy: well trusted within the community, both in terms of governance and long term maintenance
The two main approaches today, Cython and C/C++, both do well on most of these objectives. However neither is perfect. Some issues that arise include the following:
Cython
Often requires effort to make fast
Is often only used by core developers. Requires expertise to use well.
Introduces mild packaging pain, though this pain is solved frequently enough that experienced community members are used to dealing with it
Standalone C/C++
Sometimes introduces complex build and packaging concerns
Is often only used by core developers. These projects have difficulty attracting the Python community’s standard developer pool (though they do attract developers from other communities).
There are some other options out there like Numba and Pythran that, while they provide excellent performance and usability benefits, are rarely used. Let’s look into Numba’s benefits and drawbacks more closely.
Numba Benefits
Numba is generally well regarded from a technical perspective (it’s fast, easy to use, well maintained, etc.) but has historically not been trusted due to packaging and community concerns.
In any test of either performance or usability Numba almost always wins (or ties for the win). It does all of the compiler optimization tricks you expect. It supports both for-loopy code as well as Numpy-style slicing and bulk operation code. It requires almost no additional information from the user (assuming that you’re ok with JIT behavior) and so is very approachable, and very easy for novices to use well.
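To make that concrete, the basic usage pattern looks roughly like the sketch below (my own minimal example, not taken from the posts quoted here).

import numpy as np
import numba

@numba.jit(nopython=True)
def total(x):
    # A plain Python loop over a NumPy array; Numba compiles it to machine code.
    s = 0.0
    for i in range(x.shape[0]):
        s += x[i]
    return s

total(np.arange(1e6))  # the first call triggers compilation; later calls run at native speed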
This means that we get phrases like the following:
https://dionhaefner.github.io/2016/11/suck-less-scientific-python-part-2-efficient-number-crunching/
“This is rightaway faster than NumPy.”
“In fact, we can infer from this that numba managed to generate pure C code from our function and that it did it already previously.”
“Numba delivered the best performance on this problem, while still being easy to use.”
https://dionhaefner.github.io/2016/11/suck-less-scientific-python-part-2-efficient-number-crunching/
“Using numba is very simple; just apply the jit decorator to the function you want to get compiled. In this case, the function code is exactly the same as before”
“Wow! A speedup by a factor of about 400, just by applying a decorator to the function. “
http://jakevdp.github.io/blog/2015/02/24/optimizing-python-with-numpy-and-numba/
“Much better! We’re now within about a factor of 3 of the Fortran speed, and we’re still writing pure Python!”
“I should emphasize here that I have years of experience with Cython, and in this function I’ve used every Cython optimization there is … By comparison, the Numba version is a simple, unadorned wrapper around plainly-written Python code.”
http://jakevdp.github.io/blog/2013/06/15/numba-vs-cython-take-2/
Numba is extremely simple to use. We just wrap our python function with autojit (JIT stands for “just in time” compilation) to automatically create an efficient, compiled version of the function
Adding this simple expression speeds up our execution by over a factor of over 1400! For those keeping track, this is about 50% faster than the version of Numba that I tested last August on the same machine.
The Cython version, despite all the optimization, is a few percent slower than the result of the simple Numba decorator!
http://stephanhoyer.com/2015/04/09/numba-vs-cython-how-to-choose/
“Using Numba is usually about as simple as adding a decorator to your functions”
“Numba is usually easier to write for the simple cases where it works”
https://murillogroupmsu.com/numba-versus-c/
“Numba allows for speedups comparable to most compiled languages with almost no effort”
“We find that Numba is more than 100 times as fast as basic Python for this application. In fact, using a straight conversion of the basic Python code to C++ is slower than Numba.”
In all cases where authors compared Numba to Cython for numeric code (Cython is probably the standard for these cases) Numba always performs as-well-or-better and is always much simpler to write.
Here is a code example from Jake’s second blogpost:
Example: Code Complexity
# From http://jakevdp.github.io/blog/2015/02/24/optimizing-python-with-numpy-and-numba/

# Numba
import numpy as np
import numba

@numba.jit
def pairwise_python(X):
    M = X.shape[0]
    N = X.shape[1]
    D = np.empty((M, M), dtype=np.float)
    for i in range(M):
        for j in range(M):
            d = 0.0
            for k in range(N):
                tmp = X[i, k] - X[j, k]
                d += tmp * tmp
            D[i, j] = np.sqrt(d)
    return D

# Cython
import numpy as np
cimport cython
from libc.math cimport sqrt

@cython.boundscheck(False)
@cython.wraparound(False)
def pairwise_cython(double[:, ::1] X):
    cdef int M = X.shape[0]
    cdef int N = X.shape[1]
    cdef double tmp, d
    cdef double[:, ::1] D = np.empty((M, M), dtype=np.float64)
    for i in range(M):
        for j in range(M):
            d = 0.0
            for k in range(N):
                tmp = X[i, k] - X[j, k]
                d += tmp * tmp
            D[i, j] = sqrt(d)
    return np.asarray(D)
The algorithmic body of each function (the nested for loops) is identical. However the Cython code is more verbose with annotations, both at the function definition (which we would expect for any AOT compiler), but also within the body of the function for various utility variables. The Numba code is just straight Python + Numpy code. We could remove the @numba.jit decorator and step through our function with normal Python.
Example: Numpy Operations
Additionally Numba lets us use Numpy syntax directly in the function, so for example the following function is well accelerated by Numba, even though it already fits NumPy’s syntax well.
# from https://flothesof.github.io/optimizing-python-code-numpy-cython-pythran-numba.html
@numba.jit
def laplace_numba(image):
    """Laplace operator in NumPy for 2D images. Accelerated using numba."""
    laplacian = (
        image[:-2, 1:-1] + image[2:, 1:-1]
        + image[1:-1, :-2] + image[1:-1, 2:]
        - 4*image[1:-1, 1:-1])
    thresh = np.abs(laplacian) > 0.05
    return thresh
Mixing and matching Numpy-style with for-loop style is often helpful when writing complex numeric algorithms.
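As a hypothetical illustration of that mixing (again my own sketch, not from the referenced posts), a bulk NumPy expression and an explicit loop can sit side by side under one decorator:

import numpy as np
import numba

@numba.jit(nopython=True)
def row_norms_of_centered(X):
    # NumPy-style bulk operation ...
    centered = X - X.mean()
    # ... mixed with an explicit loop over rows.
    out = np.empty(X.shape[0])
    for i in range(X.shape[0]):
        out[i] = np.sqrt(np.sum(centered[i] ** 2))
    return out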
Benchmarks in these blogposts show that Numba is both simpler to use and often as-fast-or-faster than more commonly used technologies like Cython.
Numba drawbacks
So, given these advantages why didn’t Jake’s original prophecy hold true?
I believe that there are three primary reasons why Numba has not been more widely adopted among other open source projects:
LLVM Dependency: Numba depends on LLVM, which was historically difficult to install without a system package manager (like apt-get, brew) or conda. Library authors are not willing to exclude users that use other packaging toolchains, particularly Python’s standard tool, pip.
Community Trust: Numba is largely developed within a single for-profit company (Anaconda Inc.) and its developers are not well known by other library maintainers.
Lack of Interpretability: Numba’s output, LLVM, is less well understood by the community than Cython’s output, C (discussed in original-author comments in the last section)
All three of these are excellent reasons to avoid adding a dependency. Technical excellence alone is insufficient, and must be considered alongside community and long-term maintenance concerns.
http://jakevdp.github.io/blog/2015/02/24/optimizing-python-with-numpy-and-numba/
how difficult will it be for your users to install, read, modify, and contribute to your code? In the long run, this may be much more important than shaving a few milliseconds off the execution time
http://stephanhoyer.com/2015/04/09/numba-vs-cython-how-to-choose/
Cython is easier to distribute than Numba, which makes it a better option for user facing libraries
The main issue is that it can be difficult to install Numba unless you use Conda, which is great tool, but not one everyone wants to use
Cython is also a more stable and mature platform, whereas the features and performance of Numba are still evolving
https://dionhaefner.github.io/2016/11/suck-less-scientific-python-part-2-efficient-number-crunching/
Numba still only supports a subset of the Python and NumPy capabilities - and speedups are not always that dramatic.
But Numba has evolved recently
LLVM
Numba now depends on the easier-to-install library llvmlite which, as of a few months ago, is pip installable with binary wheels on Windows, Mac, and Linux. The llvmlite package is still a heavy-ish runtime dependency (42MB), but that’s significantly less than large Cython libraries like Pandas or SciPy.
If your concern was about the average user’s inability to install Numba, then I think that this concern has been resolved.
Community
Numba has three community problems:
Development of Numba has traditionally happened within the closed walls of Anaconda Inc (formerly Continuum Analytics)
The Numba maintainers are not well known within the broader Python community
There used to be a proprietary version, Numba Pro
This combination strongly attached Numba’s image to Continuum’s for-profit ventures, making community-oriented software maintainers understandably wary of dependence, for fear that dependence on this library might be used for Continuum’s financial gain at the expense of community users.
Things have changed significantly.
Numba Pro was abolished years ago. The funding for the project today comes more often from Anaconda Inc. consulting revenue, hardware vendors looking to ensure that Python runs as efficiently as possible on their systems, and from generous donations from the Gordon and Betty Moore foundation to ensure that Numba serves the open source Python community.
Developers outside of Anaconda Inc. now have core commit access, which forces communication to happen in public channels, notably GitHub (which was standard before) and Gitter chat (which is relatively new).
The maintainers are still relatively unknown within the broader community. This isn’t due to any sort of conspiracy, but is instead due more to shyness or having interests outside of OSS. Antoine, Siu, Stan, and Stuart are all considerate, funny, and clever fellows with strong enthusiasm for compilers, OSS, and performance. They are quite responsive on the Numba mailing list should you have any questions or concerns.
If your concern was about Numba trapping users into a for-profit mode, then that seems to have been resolved years ago.
If your concern is more about not knowing who is behind the project, then I encourage you to reach out. I would be surprised if you don’t walk away pleased.
The Continued Cases Against Numba
For completeness, let’s list a number of reasons why it is still quite reasonable to avoid Numba today:
It isn’t a community standard
Numba hasn’t attracted a wide developer base (compilers are hard), and so is probably still dependent on financial support for paid developers
You want to speed up non-numeric code that includes classes, dicts, lists, etc., for which you need Cython or PyPy
You want to build a library that is useful outside of Python, and so plan to build most numeric algorithms on C/C++/Fortran
You prefer ahead-of-time compilation and want to avoid JIT times
While llvmlite is cheaper than LLVM, it’s still 50MB
Understanding the compiled results is hard, and you don’t have good familiarity with LLVM
Numba features we didn’t talk about
Multi-core parallelism (see the sketch after this list)
GPUs
Run-time Specialization to the CPU you’re running on
Easy to swap out for other JIT compilers, like PyPy, if they arise in the future
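To give a flavor of the multi-core parallelism item above, Numba exposes an explicit parallel range, numba.prange, when parallel=True is passed to the decorator. This is a minimal sketch of my own rather than a benchmark:

import numpy as np
import numba

@numba.njit(parallel=True)
def parallel_sum_of_squares(x):
    total = 0.0
    # prange distributes loop iterations across CPU cores; Numba recognizes the reduction on total.
    for i in numba.prange(x.shape[0]):
        total += x[i] * x[i]
    return total

parallel_sum_of_squares(np.random.rand(1_000_000))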
Update from the original blogpost authors
After writing the above I reached out both to Stan and Siu from Numba and to the original authors of the referenced blogposts to get some of their impressions now having the benefit of additional experience.
Here are a few choice responses:
Stan:
I think one of the biggest arguments against Numba still is time. Due to a massive rewrite of the code base, Numba, in its present form, is ~3 years old, which isn’t that old for a project like this. I think it took PyPy at least 5-7 years to reach a point where it was stable enough to really trust. Cython is 10 years old. People have good reason to be conservative with taking on new core dependencies.
Jake:
One thing I think would be worth fleshing-out a bit (you mention it in the final bullet list) is the fact that numba is kind of a black box from the perspective of the developer. My experience is that it works well for straightforward applications, but when it doesn’t work well it’s extremely difficult to diagnose what the problem might be. Contrast that with Cython, where the html annotation output does wonders for understanding your bottlenecks both at a non-technical level (“this is dark yellow so I should do something different”) and a technical level (“let me look at the C code that was generated”). If there’s a similar tool for numba, I haven’t seen it.
Florian:
Elaborating on Jake’s answer, I completely agree that Cython’s annotation tool does wonders in terms of understanding your code. In fact, numba does possess this too, but as a command-line utility. I tried to demonstrate this in my blogpost, but exporting the CSS in the final HTML render kind of mangles my blog post, so here’s a screenshot: This is a case where jit(nopython=True) works, so there seems to be no coloring at all.
Florian also pointed to the SciPy 2017 tutorial by Gil Forsyth and Lorena Barba
Dion:
I hold Numba in high regard, and the speedups impress me every time. I use it quite often to optimize some bottlenecks in our production code or data analysis pipelines (unfortunately not open source). And I love how Numba makes some functions like scipy.optimize.minimize or scipy.ndimage.generic_filter well-usable with minimal effort. However, I would never use Numba to build larger systems, precisely for the reason Jake mentioned. Subjectively, Numba feels hard to debug, has cryptic error messages, and seemingly inconsistent behavior. It is not a “decorate and forget” solution; instead it always involves plenty of fiddling to get right. That being said, if I were to build some high-level scientific library à la Astropy with some few performance bottlenecks, I would definitely favor Numba over Cython (if only to spare myself the headache of getting a working C compiler on Windows).
Stephan:
I wonder if there are any examples of complex codebases (say >1000 LOC) using Numba. My sense is that this is where Numba’s limitations will start to become more evident, but maybe newer features like jitclass would make this feasible.
SciPy tutorial link
As a final take-away, you might want to follow Florian’s advice and watch Gil and Lorena’s tutorial here:
|
I'm currently struggling with implementing the Multiplicative Kalman Filter or Error State Kalman Filter as described by Landis Markley in Attitude Error Representations for Kalman Filtering. Sadly there are two versions of almost the same paper online:
First of all, I'm a little bit confused why eq. 45 in 1 is different from eq. 33 in 2. Which one is correct? Usually the covariance propagation is defined as $P = FPF^T + GQG^T$. What is the reason for using $P = FP + PF^T + GQG^T$?
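(My current reading, which may well be wrong: eq. 33 in 2 looks like the continuous-time propagation $\dot{P} = FP + PF^T + GQG^T$, while the familiar $P_{k+1} = \Phi P_k \Phi^T + Q_d$ is its discrete counterpart. With $\Phi \approx I + F\Delta t$ they agree to first order:
$$P_{k+1} = (I + F\Delta t)P_k(I + F\Delta t)^T + Q_d \approx P_k + (F P_k + P_k F^T)\Delta t + Q_d$$
so the difference would just be continuous vs. discrete form rather than a contradiction.)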
In the paper Markley just filters the attitude error and the gyro drift. I would also like to have the angular velocity and acceleration as part of the state. If I understand it right, Markley integrates gyro measurements in the predict step. At the moment I'm trying to figure out the correct way of having the angular velocity as part of the state and then performing a separate update step for the gyroscope.
Currently I have the following code:
import numpy as np
class ESKF:
"""This is an Error State Kalman Filter (ESKF) as described by Markley
see https://www.researchgate.net/publication/245432681_Attitude_Error_Representations_for_Kalman_Filtering
"""
def __init__(self):
# 3 x attitude error, 3 x gyro bias, 3 x angular velocity
self.x = np.zeros(9)
# reference quaternion
self.qref = np.array([0.0, 0.0, 0.0, 1.0])
# state covariance
self.P = np.identity(9)
# process noise
self.Q = np.identity(9) * 0.01 # TODO should be determined by tests
# sensor noise
self.R_gyr = np.identity(3) * 0.01 # TODO should be determined for sensor
self.R_acc = np.identity(3) * 0.01 # TODO should be determined for sensor
self.R_mag = np.identity(3) * 0.01 # TODO should be determined for sensor
#TODO initialization
def predict(self, dt):
"""The ESKF predict step
Parameters
----------
dt : float
The time step in s
"""
# eq. 23
self.qref += 0.5 * quat_mult(np.array([self.x[6], self.x[7], self.x[8], 0]), self.qref) * dt
# normalize to compensate numerical errors
self.qref /= np.linalg.norm(self.qref)
# eq. 38
F = np.zeros((9,9))
# df/da
F[0:3,0:3] = - vec_to_cross_matrix(self.x[6:9])
# df/db
F[0:3,3:6] = - np.identity(3)
# df/dw
F[0:3,6:9] = vec_to_cross_matrix(self.x[0:3])
#eq. 39
G = np.zeros((9,9))
G[0:3,0:3] = -np.identity(3)
G[3:6,3:6] = np.identity(3)
G[6:9,6:9] = np.identity(3)
# eq. 33
self.P = F @ self.P + self.P @ F.T + G @ self.Q @ G.T
#self.P = F @ self.P @ F.T + G @ self.Q @ G.T
def update_gyro(self, z):
"""The ESKF update step for a gyrosope
Parameters
----------
z : array, shape [3]
Sensor measurement with structure [x, y, z]
"""
# Kalman Gain
# K = np.zeros((3,3))
H = np.zeros((3,6))
# expected measurement is angular velocity + gyro drift
H[0:3,0:3] = np.identity(3)
H[0:3,3:6] = np.identity(3)
# K = P * H' (H * P * H' + R)^-
K = self.P[3:9,3:9] @ H.T @ np.linalg.inv(H @ self.P[3:9,3:9] @ H.T + self.R_gyr)
# x = x + K * (z - H * x)
self.x[3:9] += K @ (z - H @ self.x[3:9])
# P = (I - KH)P
IKH = np.identity(6) - K @ H
self.P[3:9,3:9] = IKH @ self.P[3:9,3:9]
def update_acc(self, z):
"""The ESKF update step for an accelerometer
Parameters
----------
z : array, shape [3]
Sensor measurement with structure [x, y, z]
"""
vi = np.array([0, 0, -9.81]) # TODO + acc
# eq. 42
vb_pred = rotvec_to_mat(self.x[0:3]) @ quat_to_mat(self.qref) @ vi
# h(v) = v
h = vb_pred
# Ha
# eq. 44
Ha = vec_to_cross_matrix(vb_pred)
#eq. 46
K = self.P[0:6,0:3] @ Ha.T @ np.linalg.inv(Ha @ self.P[0:3,0:3] @ Ha.T + self.R_acc)
# eq. 47
self.x[0:6] += K @ (z - h - Ha @ self.x[0:3])
# eq. 48
self.P[0:6,0:6] -= K @ Ha @ self.P[0:3,0:6]
def update_mag(self, z, B, incl, W, V):
"""The ESKF update step for a magnetometer
see https://www.nxp.com/docs/en/application-note/AN4246.pdf
Parameters
----------
z : array, shape [3]
Sensor measurement with structure [x, y, z]
B : float
The geomagnetic field strength in gauss
incl : float
The inclination angle in rad
W : array, shape [3,3]
The soft-iron distortion
V : array, shape [3]
The hard-iron distortion
"""
vi = B * np.array([np.cos(incl), 0, -np.sin(incl)])
# eq. 42
vb_pred = rotvec_to_mat(self.x[0:3]) @ quat_to_mat(self.qref) @ vi
#h(v) = W * v + V
h = W @ vb_pred + V
# Ha
# eq. 44
Ha = W @ vec_to_cross_matrix(vb_pred)
#eq. 46
K = self.P[0:6,0:3] @ Ha.T @ np.linalg.inv(Ha @ self.P[0:3,0:3] @ Ha.T + self.R_mag)
# eq. 47
self.x[0:6] += K @ (z - h - Ha @ self.x[0:3])
# eq. 48
self.P[0:6,0:6] -= K @ Ha @ self.P[0:3,0:6]
def reset(self):
"""The ESKF reset step
"""
# eq. 14
self.qref = quat_mult(gibbs_vec_to_quat(self.x[0:3]), self.qref)
self.x[0:3] = np.zeros(3)
and some helpers defined the following:
def quat_mult(a, b):
"""Multiply 2 quaternions. They should have the structure [v1, v2, v3, w]
Parameters
----------
a : array, shape [4]
Quaternion 1
b : array, shape [4]
Quaternion 2
Returns
-------
q : array, shape [4]
Quaternion product of a and b
"""
# eq. 6
v = a[3] * b[0:3] + b[3] * a[0:3] - np.cross(a[0:3], b[0:3])
w = a[3] * b[3] - a[0:3].T @ b[0:3]
return np.array([ v[0], v[1], v[2], w ])
def vec_to_cross_matrix(a):
"""Constructs the skew symmetric cross product matrix of a
Parameters
----------
a : array, shape [3]
Vector
Returns
-------
M : array, shape [3,3]
Cross product matrix of a
"""
# eq. 5
return np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
def quat_to_mat(a):
"""Converts a quaternion into a rotation matrix
Parameters
----------
a : array, shape [4]
Quaternion
Returns
-------
M : array, shape [3,3]
The rotation matrix of a
"""
# eq. 4
return (a[3]**2 - a[0]**2 - a[1]**2 - a[2]**2) * np.identity(3) - 2 * a[3] * vec_to_cross_matrix(a[0:3]) + 2 * np.outer(a[0:3], a[0:3])  # np.outer gives the v v^T term; a @ a.T collapses to a scalar for 1-D arrays
def rotvec_to_mat(a):
"""Converts a rotation vector into a rotation matrix
Parameters
----------
a : array, shape [3]
Rotation vector
Returns
-------
M : array, shape [3,3]
The rotation matrix of a
"""
# eq. 20
a_norm_sqr = a[0]**2 + a[1]**2 + a[2]**2
return np.identity(3) - vec_to_cross_matrix(a) - 0.5 * (a_norm_sqr * np.identity(3) - np.outer(a, a))  # np.outer gives a a^T; a @ a.T is a scalar for 1-D arrays
def gibbs_vec_to_quat(a):
"""Converts a gibbs vector into a quaternion
Parameters
----------
a : array, shape [3]
Gibbs vector
Returns
-------
q : array, shape [4]
The quaternion of a with structure [v1, v2, v3, w]
"""
# eq. 18b
a_norm_sqr = a[0]**2 + a[1]**2 + a[2]**2
return (1 / np.sqrt(4 + a_norm_sqr)) * np.concatenate((a, [2]))
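For completeness, this is roughly how I drive the filter in my tests (the time step and sensor values below are made up):

eskf = ESKF()
dt = 0.01  # 100 Hz loop, made-up rate
gyro_meas = np.array([0.01, -0.02, 0.005])   # rad/s, made-up
acc_meas = np.array([0.1, -0.05, -9.7])      # m/s^2, made-up

eskf.predict(dt)
eskf.update_gyro(gyro_meas)
eskf.update_acc(acc_meas)
eskf.reset()  # fold the attitude error back into the reference quaternion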
Obviously, the angular velocity and the gyro drift always have the same values. Is it worth keeping both in the state, or should I just abandon the gyro drift?
I also miss one step in the derivation of eq. 43 in 2. Let $A(a) = \{ I_{3 \times 3} - [\mathbf{a} \times] - \frac{1}{2} ( a^2 I_{3 \times 3} - \mathbf{a}\mathbf{a}^T ) \}$. Then the Taylor expansion is $h(v_B) = h(\bar{v}_B) + \frac{\delta h}{\delta v} |_{\bar{v}_B} (A(a) - I_{3 \times 3}) \bar{v}_B$ since $v_B = A(a)A(q_{ref})v_I = A(a)\bar{v}_B$. But how is this collapsed to just $[\mathbf{a} \times] \bar{v}_B$?
When I let $v_I = (\begin{smallmatrix} 0 & 0 & -9.81 \end{smallmatrix})^T + (\begin{smallmatrix} accx & accy & accz \end{smallmatrix})^T$ and $acc$ is part of my state, then I also have to include $\frac{\delta h}{\delta acc}$ in $H$ (eq. 45 in 2), right? Taking this derivative looks fairly complicated. Is there a smarter way of doing it? EDIT: It's not, it's just $\frac{\delta h}{\delta v} A(a) A(q_{ref})$
Thanks in advance, Martin
|
Overview
Using the TensorBoard Embedding Projector, you can graphically represent high dimensional embeddings. This can be helpful in visualizing, examining, and understanding your embedding layers.
In this tutorial, you will learn how to visualize this type of trained layer.
Setup
For this tutorial, we will be using TensorBoard to visualize an embedding layer generated for classifying movie review data.
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
%load_ext tensorboard
import os
import tensorflow as tf
import tensorflow_datasets as tfds
from tensorboard.plugins import projector
IMDB Data
We will be using a dataset of 25,000 movie reviews from IMDB, labeled by sentiment (positive/negative). Reviews have been preprocessed, and each review is encoded as a sequence of word indexes (integers). For convenience, words are indexed by overall frequency in the dataset, so that for instance the integer "3" encodes the 3rd most frequent word in the data. This allows for quick filtering operations such as: "only consider the top 10,000 most common words, but eliminate the top 20 most common words".
As a convention, "0" does not stand for a specific word, but instead is used to encode any unknown word. Later in the tutorial, we will be removing this row from the visualization.
(train_data, test_data), info = tfds.load(
"imdb_reviews/subwords8k",
split=(tfds.Split.TRAIN, tfds.Split.TEST),
with_info=True,
as_supervised=True,
)
encoder = info.features["text"].encoder
# shuffle and pad the data.
train_batches = train_data.shuffle(1000).padded_batch(
10, padded_shapes=((None,), ())
)
test_batches = test_data.shuffle(1000).padded_batch(
10, padded_shapes=((None,), ())
)
train_batch, train_labels = next(iter(train_batches))
Keras Embedding Layer
A Keras Embedding Layer can be used to train an embedding for each word in your vocabulary. Each word (or sub-word in this case) will be associated with a 16-dimensional vector (or embedding) that will be trained by the model.
See this tutorial to learn more about word embeddings.
# Create an embedding layer
embedding_dim = 16
embedding = tf.keras.layers.Embedding(encoder.vocab_size, embedding_dim)
# Train this embedding as part of a keras model
model = tf.keras.Sequential(
[
embedding, # The embedding layer should be the first layer in a model.
tf.keras.layers.GlobalAveragePooling1D(),
tf.keras.layers.Dense(16, activation="relu"),
tf.keras.layers.Dense(1),
]
)
# Compile model
model.compile(
optimizer="adam",
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=["accuracy"],
)
# Train model
history = model.fit(
train_batches, epochs=1, validation_data=test_batches, validation_steps=20
)
2500/2500 [==============================] - 13s 5ms/step - loss: 0.5330 - accuracy: 0.6769 - val_loss: 0.4043 - val_accuracy: 0.7800
Saving data for TensorBoard
TensorBoard reads tensors and metadata from your tensorflow projects from the logs in the specified log_dir directory. For this tutorial, we will be using /logs/imdb-example/.
In order to visualize this data, we will be saving a checkpoint to that directory, along with metadata to understand which layer to visualize.
# Set up a logs directory, so Tensorboard knows where to look for files
log_dir='/logs/imdb-example/'
if not os.path.exists(log_dir):
os.makedirs(log_dir)
# Save Labels separately on a line-by-line manner.
with open(os.path.join(log_dir, 'metadata.tsv'), "w") as f:
for subwords in encoder.subwords:
f.write("{}\n".format(subwords))
# Fill in the rest of the labels with "unknown"
for unknown in range(1, encoder.vocab_size - len(encoder.subwords)):
f.write("unknown #{}\n".format(unknown))
# Save the weights we want to analyse as a variable. Note that the first
# value represents any unknown word, which is not in the metadata, so
# we will remove that value.
weights = tf.Variable(model.layers[0].get_weights()[0][1:])
# Create a checkpoint from embedding, the filename and key are
# name of the tensor.
checkpoint = tf.train.Checkpoint(embedding=weights)
checkpoint.save(os.path.join(log_dir, "embedding.ckpt"))
# Set up config
config = projector.ProjectorConfig()
embedding = config.embeddings.add()
# The name of the tensor will be suffixed by `/.ATTRIBUTES/VARIABLE_VALUE`
embedding.tensor_name = "embedding/.ATTRIBUTES/VARIABLE_VALUE"
embedding.metadata_path = 'metadata.tsv'
projector.visualize_embeddings(log_dir, config)
%tensorboard --logdir /logs/imdb-example/
Analysis
The TensorBoard Projector is a great tool for analyzing your data and seeing embedding values relative to each other. The dashboard allows searching for specific terms, and highlights words that are nearby in the embedding space. From this example we can see that Wes Anderson and Alfred Hitchcock are both rather neutral terms, but that they are referenced in different contexts.
Hitchcock is more closely associated with words like nightmare, which likely relates to his work in horror movies, while Anderson is closer to the word heart, reflecting his heartwarming style.
|
# -*- coding: utf-8 -*-
#
# Configuration file for the Sphinx documentation builder.
#
# This file does only contain a selection of the most common options. For a
# full list see the documentation:
# http://www.sphinx-doc.org/en/master/config
# -- Path setup --------------------------------------------------------------
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import os
import sys
sys.path.insert(0, os.path.abspath('../../../ipfml'))
# -- Project information -----------------------------------------------------
project = 'ipfml'
copyright = '2019, Jérôme BUISINE'
author = 'Jérôme BUISINE'
# The short X.Y version
version = '0.4.4'
# The full version, including alpha/beta/rc tags
release = 'v0.4.4'
# -- General configuration ---------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.doctest',
'sphinx.ext.coverage',
'sphinx.ext.napoleon',
'sphinx.ext.autosummary',
'sphinx.ext.viewcode',
'sphinx.ext.coverage'
]
autosummary_generate = True
autodoc_default_flags = ['members']
autodoc_member_order = 'groupwise'
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
#
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = []
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = None
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'sphinx_rtd_theme'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#
# html_theme_options = {}
html_theme_options = {
'canonical_url': '',
#'analytics_id': 'UA-XXXXXXX-1',
'logo_only': False,
'display_version': True,
'prev_next_buttons_location': 'bottom',
'style_external_links': False,
#'vcs_pageview_mode': '',
# Toc options
'collapse_navigation': True,
'sticky_navigation': True,
'navigation_depth': 4,
'includehidden': True,
'titles_only': False
}
html_context = {
'display_github': True,
'github_user': 'prise-3d',
'github_repo': 'ipfml',
'github_version': 'master/docs/source/'
}
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Custom sidebar templates, must be a dictionary that maps document names
# to template names.
#
# The default sidebars (for documents that don't match any pattern) are
# defined by theme itself. Builtin themes are using these templates by
# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',
# 'searchbox.html']``.
#
# html_sidebars = {}
# -- Options for HTMLHelp output ---------------------------------------------
# Output file base name for HTML help builder.
htmlhelp_basename = 'ipfmldoc'
# -- Options for LaTeX output ------------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#
# 'preamble': '',
# Latex figure (float) alignment
#
# 'figure_align': 'htbp',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(master_doc, 'ipfml.tex', 'ipfml Documentation',
'Jérôme BUISINE', 'manual'),
]
# -- Options for manual page output ------------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
(master_doc, 'ipfml', 'ipfml Documentation',
[author], 1)
]
# -- Options for Texinfo output ----------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, 'ipfml', 'ipfml Documentation',
author, 'ipfml', 'One line description of project.',
'Miscellaneous'),
]
# -- Options for Epub output -------------------------------------------------
# Bibliographic Dublin Core info.
epub_title = project
# The unique identifier of the text. This can be a ISBN number
# or the project homepage.
#
# epub_identifier = ''
# A unique identification for the text.
#
# epub_uid = ''
# A list of files that should not be packed into the epub file.
epub_exclude_files = ['search.html']
# -- Extension configuration -------------------------------------------------
|
Dec 18 2018
Amazon's qid in the url is the number of seconds since January 1st, 1970 (the Unix epoch). They use it to detect whether a link was freshly searched, signaling that that keyword was actually searched, or, if it's an old qid, that the keyword was not used and someone just visited the link (I probably explained that like crap, but I've been coding for like 20 hours straight and my brain hurts). Here's the code:
import datetime
round((datetime.datetime.today()-datetime.datetime(1970,1,1)).total_seconds())
scrapy export to json:
scrapy crawl amazonproducts -o data3.json
Dec 18 2018
I'm trying to find a clever way to get a bunch of keywords in a specific niche, and of course my first instinct is to scrape them. Getting the data was pretty easy actually. I just made a url generator with a bunch of keywords and imported the urls into a chrome extension web scraper (that way I could avoid having to use sessions in a scraper, and this was way easier). Make sure to use the web scraper I linked here because the other ones are garbage. The only annoying thing is that the scraper doesn't have a good way to group content that came from the same parent div unless you scrape all of the content of that div, which is super messy. So once the scrape finishes I just copy the column with all of the data, paste it into a text file, and find-replace tabs with nothing (delete all the TABS ARGHGHH). It will look something like this:
"ITP
Geekcreit DUE R3 32 Bit ARM Module With USB Cable Arduino Compatible
SKU: 906466
Price: $12.99
Est. $0.78 Per Sale
45 Day Cookie
BANGGOOD TECHNOLOGY CO., LIMITED
Merchant ID: 32599
www.banggood.com
30 day Average Commission: $2.93
30 day Average Sale Amount: $42.15
30 Day Average Reversal Rate: 2.45 %
30 Day Conversion Rate: 6.81%
Join Program
Show More Products
Add to Favorites"
"
Wooden Mixing Paddle, 42"" Length
SKU: 10106
Price: $13.60
Est. $0.78 Per Sale
30 Day Cookie
Kerekes kitchen & Restaurant Supplies
Merchant ID: 57964
www.BakeDeco.com
30 day Average Commission: $0.82
30 day Average Sale Amount: $140.17
30 Day Average Reversal Rate: 0.00 %
30 Day Conversion Rate: 10.32%
Join Program
Show More Products
Add to Favorites"
So I had to create a convert-scraped function that basically looks for a line that starts with a ", but not a double "" (some products have double quotes). Surprisingly, it worked perfectly with zero issues, but even if a few got mixed up on a product I made it so it resets after each product. Anyways, here's the code:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import random
import csv
cats = [x.rstrip() for x in open('categories.txt', 'r').readlines()]
filters = [x.rstrip() for x in open('filters.txt', 'r').readlines()]
types = 'productSearch', 'basicKeyword'
pages = list(range(1, 452, 50))
def url_gen(search_type, keyword, page_start, search_filter):
return ('https://account.shareasale.com/a-programs.cfm#searchType={search_type}&'
'keyword={keyword}&ascordesc=desc&start={page_start}&order={search_filter}'
.format(search_type=search_type, keyword=keyword, page_start=page_start, search_filter=search_filter))
def all_products(file_name):
urls = []
for cat in cats:
for search_filter in filters:
for page_start in pages:
urls.append(url_gen(types[0], cat, page_start, search_filter))
save_sitemap(create_sitemap(urls, file_name), file_name)
def create_sitemap(urls, file_name):
urls_string_list = []
count = 1
urls_string_list.append('[')
for url in urls:
if count < len(urls):
urls_string_list.append('"{url}",'.format(url=url))
count += 1
else:
urls_string_list.append('"{url}"]'.format(url=url))
urls_string = ''.join(urls_string_list)
return ('{{"_id":"{file_name}{random_int}","startUrl":{urls_string},"selectors":[{{"id":"name",'
'"type":"SelectorText","parentSelectors":["_root"],"selector":"div.mGeneral div.org",'
'"multiple":true,"regex":"","delay":0}},{{"id":"pnk","type":"SelectorText","parentSelectors":["_root"],'
'"selector":"div.org a","multiple":true,"regex":"","delay":0}},{{"id":"price","type":"SelectorText",'
'"parentSelectors":["_root"],"selector":"div.price","multiple":true,"regex":"","delay":0}},{{"id":"per sale",'
'"type":"SelectorText","parentSelectors":["_root"],"selector":"div.cookie","multiple":true,"regex":"","delay":0}}]}}'
.format(file_name=file_name, random_int=str(random.randint(1, 999)), urls_string=urls_string))
def save_sitemap(sitemap, file_name):
with open('./generated/{}-sitemap-{}.txt'.format(file_name, str(random.randint(1, 999))), 'w') as file:
file.write(sitemap)
print(file_name, 'saved in /generated')
def convert_scraped(file_name):
keys = ['title', 'sku', 'price', 'per_sale', 'cookie', 'company', 'merch_id',
'website', 'commission', 'sale_amount', 'reversal_rate', 'conversion_rate',
'join', 'more', 'add']
with open('./scraped/{file_name}.txt'.format(file_name=file_name), 'r') as f:
with open('data.csv', 'w', newline='') as csvf:
writer = csv.writer(csvf)
writer.writerow(i for i in keys)
count = 0
data = {}
for line in f.readlines():
count += 1
if line[0] == '\"' and line[1] != '\"':
count = 0
with open('data.csv', 'a', newline='') as csvf:
writer = csv.writer(csvf)
writer.writerow(data.values())
else:
data[keys[count - 1]] = line.rstrip()
print('Data written to data.csv')
if __name__ == '__main__':
# all_products('products')
convert_scraped('shareasale1-data')
|
Making your own programming language with Python
Why make your own language?
When you write your own programming language, you control the entire programmer experience.
This allows you to shape exactly how each aspect of your language works and how a developer interacts with it.
This allows you to make a language with things you like from other languages and none of the stuff you don't.
In addition, learning about programming language internals can help you better understand the internals of programming languages you use every day, which can make you a better programmer.
How programming languages work
Every programming language is different in the way it runs, but many consist of a couple fundamental steps: lexing and parsing.
Introduction to Lexing
Lexing is short for LEXical analysis.
The lex step is where the language takes the raw code you've written and converts it into an easily parsable structure.
This step interprets the syntax of your language and turns the text into special symbols inside the language called tokens.
For example, let's say you have some code you want to parse. To keep it simple I'll use Python-like syntax, but it could be anything. It doesn't even have to be text.
# this is a comment
a = (1 + 1)
A lexer to parse this code might do the following:
Discard all comments
Produce a token that represents a variable name
Produce left and right parenthesis tokens
Convert literals like numbers or strings to tokens
Produce tokens for math operations like + - * / (and maybe bitwise/logical operators as well)
The lexer will take the raw code and interpret it into a list of tokens.
The lexer can also be used to ensure that two pieces of code that may be written differently, like 1 + 1 and 1+1, are still parsed the same way.
For the code above, it might generate tokens like this:
NAME(a) EQUALS LPAREN NUMBER(1) PLUS NUMBER(1) RPAREN
Tokens can be in many forms, but the main idea here is that they are a standard and easy to parse way of representing the code.
Introduction to Parsing
The parser is the next step in the running of your language.
Now that the lexer has turned the text into consistent tokens, the parser simplifies and executes them.
Parser rules recognize a sequence of tokens and do something about them.
Let's look at a simple example for a parser with the same tokens as above.
A simple parser could just say:
If I see the GREET token and then a NAME token, print Hello, and then the name.
A more complicated parser aiming to parse the code above might have these rules, which we will explore later:
Try to classify as much code as possible as an expression. By "as much code as possible" I mean the parser will first try to consider a full mathematical operation as an expression, and then, if that fails, convert a single variable or number to an expression. This ensures that as much code as possible will be matched as an expression. The "expression" concept allows us to catch many patterns of tokens with one piece of code. We will use the expression in the next step.
Now that we have a concept of an expression, we can tell the parser that if it sees the tokens NAME EQUALS and then an expression, that means a variable is being assigned (see the sketch below).
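As a preview, the two rules above could look roughly like this in PLY (hypothetical names; the real grammar for our example is built up step by step below):

variables = {}

def p_statement_assign(t):
    'statement : NAME EQUALS expression'
    # Store the value of the expression under the variable name.
    variables[t[1]] = t[3]

def p_expression_name(t):
    'expression : NAME'
    # A bare variable is itself an expression; unknown names default to 0 here.
    t[0] = variables.get(t[1], 0)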
Using PLY to write your language
What is PLY?
Now that we know the basics of lexing and parsing, lets start writing some python code to do it.
PLY stands for Python Lex Yacc.
It is a library you can use to make your own programming language with python.
Lex is a well known library for writing lexers.
Yacc stands for "Yet Another Compiler Compiler", which means it compiles new languages, which are themselves compilers.
This tutorial is a short example, but the PLY documentation is an amazing resource with tons of examples. I would highly recommend that you check it out if you are using PLY.
For this example, we are going to be building a simple calculator with variables. If you want to see the fully completed example, you can fork this repl: [TODO!!]
Lexing with PLY lex
Lexer tokens
Lets start our example! Fire up a new python repl and follow along with the code samples.
To start off, we need to import PLY:
from ply import lex, yacc
Now let's define our first token. PLY requires you to have a tokens list which contains every token the lexer can produce. Let's define our first token, PLUS for the plus sign:
tokens = [
'PLUS',
]
t_PLUS = r'\+'
A string that looks like r'' is special in Python. The r prefix means "raw", which keeps backslashes in the string literally. For example, to define the string \+ in Python, you could either do '\\+' or r'\+'. We are going to be using a lot of backslashes, so raw strings make things a lot easier.
But what does \+ mean?
Well in the lexer, tokens are mainly parsed using regexes.
A regex is like a special programming language specifically for matching patterns in text.
A great resource for regexes is regex101.com where you can test your regexes with syntax highlighting and see explanations of each part.
I'm going to explain the regexes included in this tutorial, but if you want to learn more you can play around with regex101 or read one of the many good regex tutorials on the internet.
The regex \+ means "match a single character +".
We have to put a backslash before it because + normally has a special meaning in regex, so we have to "escape" it to show we want to match a + literally.
We are also required to define a function that runs when the lexer encounters an error:
def t_error(t):
print(f"Illegal character {t.value[0]!r}")
t.lexer.skip(1)
This function just prints out a warning when it hits a character it doesn't recognize and then skips it (the !r means repr so it will print out quotes around the character).
You can change this to be whatever you want in your language though.
Optionally, you can define a newline token which isn't produced in the output of the lexer, but keeps track of each line.
def t_newline(t):
r'\n+'
t.lexer.lineno += len(t.value)
Since this token is a function, we can define the regex in docstring of the function instead.
The function takes a parameter t, which is a special object representing the match that the lexer found. We can access the lexer using the t.lexer attribute.
This function matches at least one newline character and then increases the line number by the amount that it sees. This allows the lexer to know what line number it's on at all times using the lexer.lineno variable.
Now we can use the line number in our error function:
def t_error(t):
print(f"Illegal character {t.value[0]!r} on line {t.lexer.lineno}")
t.lexer.skip(1)
Let's test out the lexer!
This is just some temporary code, you don't have to know what this code does, because once we implement a parser, the parser will run the lexer for you.
lexer = lex.lex()
lexer.input('+')
for token in lexer:
print(token)
Play around with the value passed to lexer.input.
You should notice that any character other than a plus sign makes the error message print out, but doesn't crash the program.
In your language, you can make it gracefully ignore lex errors like this or make it stop running by editing the t_error function.
If you add more lines to the input string, the line number in the error message should change.
More complicated tokens
Let's delete the test token and add some more complicated tokens.
Replace your tokens list and the t_PLUS line with the following code:
reserved_tokens = {
'greet': 'GREET'
}
tokens = list(reserved_tokens.values()) + [
'SPACE',
'NAME'  # t_ID can produce NAME tokens, so it must be declared here too
]
t_SPACE = r'[ ]'
def t_ID(t):
r'[a-zA-Z_][a-zA-Z0-9_]*'
if t.value in reserved_tokens:
t.type = reserved_tokens[t.value]
else:
t.type = 'NAME'
return t
Let's explore the regex we have in the t_ID function.
This regex is more complicated that the simple ones we've used before.
First, we have [a-zA-Z_]. This is a character class in regex. It means, match any lowercase letter, uppercase letter, or underscore.
Next we have [a-zA-Z0-9_]. This is the same as above except numbers are also included.
Finally, we have *. This means "repeat the previous group or class zero to unlimited times".
Why do we structure the regex like this?
Having two separate classes makes sure that the first one must match for it to be a valid variable.
Excluding numbers from the first class not only prevents matching plain numbers, it also makes sure you can't start a variable name with a number.
You can still have numbers in the variable name, because they are matched by the second class of the regex.
In the code, we first have a dictionary of reserved names.
This is a mapping of patterns to the token type that they should be.
The only one we have says that greet should be mapped to the GREET token.
The code that sets up the tokens list takes all of the possible reserved token values, in this example just ['GREET'], and adds on ['SPACE', 'NAME'], giving us ['GREET', 'SPACE', 'NAME'] automatically!
But why do we have to do this? Couldn't we just use something like the following code?
# Don't use this code! It doesn't work!
t_GREET = r'greet'
t_SPACE = r'[ ]'
t_NAME = r'[a-zA-Z_][a-zA-Z0-9_]*'
Actually, if we used that code, greet would never be matched! The lexer would match it with the NAME token. In order to avoid this, we define a new type of token which is a function. This function has the regex as its docstring and is passed a t parameter. This parameter has a value attribute which is the pattern matched.
The code inside this function simply checks if this value is one of the special reserved names we defined before. If it is, we set the special type attribute of the t parameter. This type controls the type of token which is produced from the pattern. When it sees the name greet, it will see that greet is in the reserved names dictionary and produce a token of type GREET, because that is the corresponding value in the dictionary. Otherwise, it will produce a NAME token because this is a regular variable.
This allows you to add more reserved terms easily later; it's as simple as adding a value to the dictionary.
If needed, you could also make the keys of the reserved names dictionary regexes and then match each regex against t.value in the function.
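For example, a hypothetical regex-keyed variation could look like this (not needed for the rest of this tutorial):

import re

reserved_patterns = {
    r'greet$': 'GREET',
    r'h+i+$': 'GREET',  # hypothetical: 'hi', 'hiii', ... also count as a greeting
}

def t_ID(t):
    r'[a-zA-Z_][a-zA-Z0-9_]*'
    for pattern, token_type in reserved_patterns.items():
        if re.match(pattern, t.value):
            t.type = token_type
            break
    else:
        t.type = 'NAME'
    return t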
If you want to change these rules for your language, feel free!
Parsing with PLY yacc
Fair warning: Yacc can sometimes be hard to use and debug, even if you know Python well.
Keep in mind, you don't have to use both lex and yacc, if you want you can just use lex and then write your own code to parse the tokens.
With that said lets get started.
Yacc basics
Before we get started, delete the lexer testing code (everything from lexer.input onward).
When we run the parser, the lexer is automatically run.
Let's add our first parser rule!
def p_hello(t):
'statement : GREET SPACE NAME'
print(list(t))
print(f"Hello, {t[3]}")
Let's break this down.
Again, we have information on the rule in the docstring.
This information is called a BNF grammar. A statement in a BNF grammar consists of a non-terminal (the grammar rule being defined) and terminals (the symbols it matches).
In the example above, statement is the non-terminal and GREET SPACE NAME are terminals.
The left-hand side describes what is produced by the rule, and the right-hand side describes what matches the rule.
The right hand side can also have non-terminals in it, just be careful to avoid infinite loops.
Basically, the yacc parser works by pushing tokens onto a stack, and looking at the current stack and the next token and seeing if they match any rules that it can use to simplify them. Here is a more in-depth explanation and example.
Before the above example can run, we still have to add some more code.
Just like for the lexer, the error handler is required:
def p_error(t):
if t is None: # lexer error, already handled
return
print(f"Syntax Error: {t.value!r}")
Now let's create and run the parser:
parser = yacc.yacc()
parser.parse('greet replit')
If you run this code you should see:
[None, 'greet', ' ', 'replit']
Hello, replit
The first line is the list version of the object passed to the parser function.
The first value is the statement that will be produced from the function, so it is None.
Next, we have the values of the tokens we specified in the rule.
This is where the t[3] part comes from. This is the third item in the array, which is the NAME token, so our parser prints out Hello, replit!
Note: Creating the parser tables is a relatively expensive operation, so the parser creates a file called parsetab.py which it can load the parse tables from if they haven't changed.
You can change this filename by passing a kwarg into the yacc initialization, like parser = yacc.yacc(tabmodule='fooparsetab')
More complicated parsing: Calculator
This example is different from our running example, so I will just show a full code example and explain it.
from ply import lex, yacc
tokens = (
'NUMBER',
'PLUS', 'MINUS', 'TIMES', 'DIVIDE',
'LPAREN', 'RPAREN',
)
t_PLUS = r'\+'
t_MINUS = r'-'
t_TIMES = r'\*'
t_DIVIDE = r'/'
t_LPAREN = r'\('
t_RPAREN = r'\)'
def t_NUMBER(t):
r'\d+'
try:
t.value = int(t.value)
except ValueError:
print(f"Integer value too large: {t.value}")
t.value = 0
return t
def t_newline(t):
r'\n+'
t.lexer.lineno += len(t.value)
def t_error(t):
print(f"Illegal character {t.value[0]!r} on line {t.lexer.lineno}")
t.lexer.skip(1)
t_ignore = ' \t'
lexer = lex.lex()
# Parsing
def p_expression_binop(t):
'''expression : expression PLUS expression
| expression MINUS expression
| expression TIMES expression
| expression DIVIDE expression'''
if t[2] == '+' : t[0] = t[1] + t[3]
elif t[2] == '-': t[0] = t[1] - t[3]
elif t[2] == '*': t[0] = t[1] * t[3]
elif t[2] == '/': t[0] = t[1] / t[3]
def p_expression_group(t):
'expression : LPAREN expression RPAREN'
t[0] = t[2]
def p_expression_number(t):
'expression : NUMBER'
t[0] = t[1]
def p_error(t):
if t is None: # lexer error
return
print(f"Syntax Error: {t.value!r}")
parser = yacc.yacc()
if __name__ == "__main__":
while True:
inp = input("> ")
print(parser.parse(inp))
First we start off with the tokens: numbers, mathematical operations, and parenthesis.
You might notice that I didn't use the reserved_tokens trick, but you can implement it if you want.
Next we have a simple number token which matches 0-9 with \d+ and then converts its value from a string to an integer.
The next code we haven't used before is t_ignore.
This variable represents a string of all characters the lexer should ignore, which here is ' \t', meaning spaces and tabs.
When the lexer sees these, it will just skip them. This allows users to add spaces without it affecting the lexer.
Now we have 3 parser directives.
The first is a large one, producing an expression from 4 possible input values, one for each math operation.
Each input has an expression on either side of the math operator.
Inside this directive, we have some (pretty ugly) code that performs the correct operation based on the operation token given.
If you want to make this prettier, consider a dictionary using the python stdlib operator module.
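For instance, the same rule written with an operator dictionary might look like this (just a sketch of the suggestion above, not part of the calculator listing):

import operator

BINOPS = {
    '+': operator.add,
    '-': operator.sub,
    '*': operator.mul,
    '/': operator.truediv,
}

def p_expression_binop(t):
    '''expression : expression PLUS expression
                  | expression MINUS expression
                  | expression TIMES expression
                  | expression DIVIDE expression'''
    # Look up the matched operator character and apply it to the two sub-expressions.
    t[0] = BINOPS[t[2]](t[1], t[3])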
Next, we define an expression with parenthesis around it as being the same as the expression inside.
This substitutes the value inside the parentheses in for them, so the contents are evaluated first.
With very little code we created a very complicated rule that can deal with nested parenthesis correctly.
Finally, we define a number as being able to be an expression, which allows a number to be used as one of the expressions in rule 1.
For a challenge, try adding variables into this calculator!
You should be able to set variables by using syntax like varname = any_expression and you should be able to use variables in expressions.
If you're stuck, see one solution from the PLY docs.
Thats it!
Thanks for reading! If you have questions, feel free to ask on the Replit discord's #help-and-reviews channel, or just in the comments.
Have fun!
|
Note: the string art may look terrible here because of the SE font weirdness :P :(
Given a list of 4-tuples representing the corners of rectangles, draw each translucent rectangle on top of the previous ones, in that order.
For this challenge, the top-left corner has the smallest coordinates, with the x-axis increasing to the right and the y-axis increasing downwards.
(x0, y0, x1, y1) or (x0, x1, y0, y1) are the coordinate pairs of the top-left and bottom-right corners of a rectangle (you can pick either of the two forms, but you must be consistent).
What do I mean by "translucent rectangles"? Well, for this challenge you will use the space character and most of the box-drawing characters; specifically, all of the ones used to draw rectangles with "bold" and light lines. When a translucent rectangle is drawn, any thin lines in the space it occupies disappear, any bold lines become thin, and the rectangle itself is drawn with bold lines.
For example, if you draw a rectangle in the top left and then one in the bottom right, you get:
┏━━━━┓┃ ┃┃ ┃┃ ┏━━╇━━┓┃ ┃ │ ┃┗━╉──┘ ┃ ┃ ┃ ┃ ┃ ┗━━━━━┛
To be clear, lines are lightened (bold -> thin -> none) for all lines strictly within the rectangle (for example, downwards facing lines are affected for the top edge but not the bottom edge).
Test cases
For each test case, the input lines are given first, followed by the Unicode art.
0 0 5 5
5 5 10 10
3 3 7 7
2 2 8 8
┏━━━━┓ ┃ ┃ ┃ ┏━━╇━━┓ ┃ ┃┌─┴─┐┃ ┃ ┃│ │┃ ┗━╉┤ ├╊━┓ ┃│ │┃ ┃ ┃└─┬─┘┃ ┃ ┗━━╈━━┛ ┃ ┃ ┃ ┗━━━━┛
14 5 15 9
13 2 15 16
6 4 15 11
 ┏━┓ ┃ ┃ ┏━━━━━━╇━┫ ┃ │ ┃ ┃ │ ┃ ┃ │ ┃ ┃ │ ┃ ┃ │ ┃ ┃ │ ┃ ┗━━━━━━╈━┫ ┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃ ┗━┛
6 8 10 11
15 12 16 16
14 10 16 16
9 1 15 15
 ┏━━━━━┓ ┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃ ┏━━╉┐ ┃ ┃ ┃│ ┃ ┃ ┃│ ┌╊┓ ┗━━╉┘ │┃┃ ┃ │┠┨ ┃ │┃┃ ┃ │┃┃ ┗━━━━╈┩┃ ┗┷┛
Rules
The list of 4-tuples can be taken in any reasonable input format. The top-left corner can be any of (0, 0), (0, 1), (1, 0), (1, 1).
The output must be the Unicode art as described. The output may have no trailing newline, or at most one trailing newline (after the last line). Trailing whitespace is ignored.
Code points
The heavy and light horizontal and vertical pipes are in the range [U+2500, U+2503].
The various corner pipes are in the range [U+250C, U+251C].
The three-armed pipes are in the range [U+251C, U+253C].
The four-armed pipes are in the range [U+253C, U+254C].
The remaining pipes that can be found in my program are never actually used.
Python 3, 289 286 bytes
l,u=eval(input())
*_,w,h=map(max,zip(*l))
r=[*map(list,[' '*-~w]*-~h)]
R=range
for x,y,X,Y in l:
 for i in R(x,X+1):
  for j in R(y,Y+1):Q=i
Takes input as a list of 4-tuples: (x0, y0, x1, y1), along with the pipe-drawing characters as follows: "╶╺╵└┕╹┖┗╴─╼┘┴┶┚┸┺╸╾━┙┵┷┛┹┻╷┌┍│├┝╿┞┡┐┬┮┤┼┾┦╀╄┑┭┯┥┽┿┩╃╇╻┎┏╽┟┢┃┠┣┒┰┲┧╁╆┨╂╊┓┱┳┪╅╈┫╉╋"
Supports boxes of zero width or height (and uses all of the box-drawing characters).
‘Ungolfed’:
u=" ╶╺╵└┕╹┖┗╴─╼┘┴┶┚┸┺╸╾━┙┵┷┛┹┻╷┌┍│├┝╿┞┡┐┬┮┤┼┾┦╀╄┑┭┯┥┽┿┩╃╇╻┎┏╽┟┢┃┠┣┒┰┲┧╁╆┨╂╊┓┱┳┪╅╈┫╉╋"
#Create array of spaces:
l=eval(input())
w,h=list(map(max,zip(*l)))[2:]
r=[[' ']*w for _ in' '*h]
for x,y,X,Y in l:
 n,m=X-1,Y-1
 for i in range(x,X):
  for j in range(y,Y):
   A,B=j in(y,m),i in(x,n)
   P=(i
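Since the ungolfed listing above is cut off, here is my own readable reconstruction of the same idea (a sketch, not the author's exact program; it assumes (x0, y0, x1, y1) input with the top-left corner at (0, 0)). Each cell keeps four arm weights (none/light/heavy); drawing a rectangle first lightens every arm that points strictly into it, then sets its own border arms to heavy, and the lookup string from the answer maps the weight combination to a box-drawing character:

PIPES = " ╶╺╵└┕╹┖┗╴─╼┘┴┶┚┸┺╸╾━┙┵┷┛┹┻╷┌┍│├┝╿┞┡┐┬┮┤┼┾┦╀╄┑┭┯┥┽┿┩╃╇╻┎┏╽┟┢┃┠┣┒┰┲┧╁╆┨╂╊┓┱┳┪╅╈┫╉╋"

def draw(rects):
    # rects: list of (x0, y0, x1, y1) with x0 <= x1 and y0 <= y1, origin at the top left
    w = max(r[2] for r in rects) + 1
    h = max(r[3] for r in rects) + 1
    # grid[y][x] = [down, left, up, right] arm weights: 0 = none, 1 = light, 2 = heavy
    grid = [[[0, 0, 0, 0] for _ in range(w)] for _ in range(h)]
    for x0, y0, x1, y1 in rects:
        # 1) lighten every arm that stays strictly inside the new rectangle
        for y in range(y0, y1 + 1):
            for x in range(x0, x1 + 1):
                c = grid[y][x]
                if y < y1: c[0] = max(c[0] - 1, 0)  # down arm
                if x > x0: c[1] = max(c[1] - 1, 0)  # left arm
                if y > y0: c[2] = max(c[2] - 1, 0)  # up arm
                if x < x1: c[3] = max(c[3] - 1, 0)  # right arm
        # 2) draw the new rectangle's own border with heavy arms
        for x in range(x0, x1 + 1):
            for y in (y0, y1):
                if x > x0: grid[y][x][1] = 2
                if x < x1: grid[y][x][3] = 2
        for y in range(y0, y1 + 1):
            for x in (x0, x1):
                if y > y0: grid[y][x][2] = 2
                if y < y1: grid[y][x][0] = 2
    # every cell indexes the lookup string as down*27 + left*9 + up*3 + right
    return "\n".join(
        "".join(PIPES[d*27 + l*9 + u*3 + r] for d, l, u, r in row).rstrip()
        for row in grid)

# e.g. one rectangle in the top left, then one in the bottom right, as in the example above
print(draw([(0, 0, 5, 5), (3, 3, 8, 8)]))

The index scheme matches the character order in the string: index 0 is the space, and index 80 (all four arms heavy) is ╋.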
|
Use a function that takes a pandas series as argument in the __init__ part
Mislav Šagovac last edited by
Hello,
First, I want to say I read plenty of the docs (not all, there's a lot of stuff :)) and successfully implemented some simple strategies, like buying if the price is above the VIX and vice versa.
Now I wanted to backtest my ML strategy with backtrader.
In a nutshell, I have a saved sklearn model in pkl format. I want to make a prediction with this model on every "event" and buy if the prediction is 1, and sell if it is -1. If the position is already 1, then keep holding, and vice versa.
Now I don't want to make predictions every minute. On the contrary, I want to make a prediction only when an event happens. The event is calculated using a CUSUM filter. In the pandas DataFrame world, I would calculate the trading events using a function from the mlfinlab package:
# Compute volatility
daily_vol = mlfinlab.util.get_daily_vol(
close,
lookback=self.volatility_lookback)
But to calculate daily volatility using this function, we have to pass a pandas series (close prices) as the first argument.
My first naive approach to implementing this in backtrader was to define daily_vol the way I would in the pandas world:
class RandomForestStrategy(bt.Strategy):
    params = (
        ('volatility_scaler', 1),
        ('volatility_lookback', 50)
    )

    def start(self):
        # get started value
        self.val_start = self.broker.get_cash()  # keep the starting cash
        self.log(f"Type: {type(self.datas[0].close)}")

    def log(self, txt, dt=None):
        ''' Logging function for this strategy'''
        dt = dt or self.datas[0].datetime.datetime(0)
        print(f'{dt.isoformat()}, {txt}')

    def __init__(self):
        # Keep a reference to the "close" line in the data[0] dataseries
        self.dataclose = self.datas[0].close
        # load ml model
        clf = joblib.load("C:/Users/Mislav/Documents/GitHub/trademl/trademl/modeling/rf_model.pkl")
        # Compute volatility and get CUSUM events
        self.daily_vol = ml.util.get_daily_vol(
            self.datas[0].close,
            lookback=self.params.volatility_lookback)
        # self.cusum_events = ml.filters.cusum_filter(
        #     self.dataclose,
        #     threshold=self.daily_vol.mean()*self.params.volatility_scaler)
        # To keep track of pending orders and buy price/commission
        self.order = None
        self.buyprice = None
        self.buycomm = None
There is also the next part of the strategy, but it's not important for now. Backtrader can't calculate the daily volatility here because self.datas[0].close is a line buffer object, not a pandas series (which is what the function expects).
My question is: is it possible to somehow provide a pandas series as the argument to ml.util.get_daily_vol, or do I have to rewrite the original function from mlfinlab?
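One approach that might work (a sketch, not tested against mlfinlab; it assumes get_daily_vol only needs a datetime-indexed pandas Series) is to rebuild a Series from the line buffer inside next(), using line.get() and the datetime line, instead of handing the line object to the function in __init__:

import pandas as pd

class RandomForestStrategy(bt.Strategy):
    params = (('volatility_scaler', 1), ('volatility_lookback', 50))

    def next(self):
        lookback = self.p.volatility_lookback
        if len(self.data) < lookback:
            return  # not enough bars buffered yet
        # Rebuild a datetime-indexed pandas Series from the close line buffer
        stamps = [self.data.datetime.datetime(-i) for i in range(lookback - 1, -1, -1)]
        closes = pd.Series(self.data.close.get(size=lookback),
                           index=pd.DatetimeIndex(stamps))
        daily_vol = ml.util.get_daily_vol(closes, lookback=lookback)
        # ... use daily_vol / the CUSUM filter to decide whether this bar is an "event"

Doing this on every bar recomputes the Series each time, so for a minute feed it may be cheaper to compute the event timestamps once on the raw DataFrame before the backtest and just look them up inside next().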
Mislav Šagovac last edited by
It seems to me that none of the backtesting frameworks I have tried offer an easy way to include ML models. It's not rare to need hundreds of variables in ML based on OHLC data, and I don't understand how to construct that data inside backtrader. I know I can write a function to generate every indicator by hand, but that's probably the last option. I have a function that generates features on a pandas df using the talib package and some of my own calculations, but I can't just call this function in backtrader.
|